Search Results: "sre"

15 November 2021

Vincent Bernat: Git as a source of truth for network automation

The first step when automating a network is to build the source of truth. A source of truth is a repository of data that provides the intended state: the list of devices, the IP addresses, the network protocol settings, the time servers, etc. A popular choice is NetBox. Its documentation highlights its usage as a source of truth:
NetBox intends to represent the desired state of a network versus its operational state. As such, automated import of live network state is strongly discouraged. All data created in NetBox should first be vetted by a human to ensure its integrity. NetBox can then be used to populate monitoring and provisioning systems with a high degree of confidence.
When introducing Jerikan, a common piece of feedback we got was: you should use NetBox for this. Indeed, Jerikan's source of truth is a bunch of YAML files versioned with Git.

Why Git? If we look at how things are done with servers and services, in a datacenter or in the cloud, we are likely to find users of Terraform, a tool turning declarative configuration files into infrastructure. Declarative configuration management tools like Salt, Puppet,1 or Ansible take care of server configuration. NixOS is an alternative: it combines package management and configuration management with a functional language to build virtual machines and containers. When using a Kubernetes cluster, people use Kustomize or Helm, two other declarative configuration management tools. Taken together, these tools implement the infrastructure as code paradigm.
Infrastructure as code is an approach to infrastructure automation based on practices from software development. It emphasizes consistent, repeatable routines for provisioning and changing systems and their configuration. You make changes to code, then use automation to test and apply those changes to your systems. Kief Morris, Infrastructure as Code, O'Reilly.
A version control system is a central tool for infrastructure as code. The usual candidate is Git with a source code management system like GitLab or GitHub. You get:
Traceability and visibility
Git keeps a log of all changes: what, who, why, and when. With a bit of discipline, each change is explained and self-contained. It becomes part of the infrastructure documentation. When the support team complains about a degraded experience for some customers over the last two months or so, you quickly discover this may be related to a change to an incoming policy in New York.
Rolling back
If a change is defective, it can be reverted quickly, safely, and without much effort, even if other changes happened in the meantime. The policy change at the origin of the problem spanned three routers. Reverting this specific change and deploying the configuration lets you resolve the situation until you find a better fix.
Branching, reviewing, merging
When working on a new feature or refactoring some part of the infrastructure, a team member creates a branch and works on their change without interfering with the work of other members. Once the branch is ready, a pull request is created and the change is ready to be reviewed by the other team members before merging. You discover the issue was related to diverting traffic through an IX where one ISP was connected without enough capacity. You propose and discuss a fix that includes a change of the schema and the templates used to declare policies to be able to handle this case.
Continuous integration
For each change, automated tests are triggered. They can detect problems and give more details on the effect of a change. Branches can be deployed to a test infrastructure where regression tests are executed. The results can be synthesized as a comment in the pull request to help the review. You check your proposed change does not modify the other existing policies.

Why not NetBox? NetBox does not share these features. It is a database with a REST and a GraphQL API. Traceability is limited: changes are not grouped into a transaction and they are not documented. You cannot fork the database. Usually, there is one staging database to test modifications before applying them to the production database. It does not scale well and reviews are difficult. Applying the same change to the production database can be hazardous. Rolling back a change is non-trivial.

Update (2021-11) Nautobot, a fork of NetBox, will soon address this point by using Dolt, an SQL database engine allowing you to clone, branch, and merge, like a Git repository. Dolt is compatible with MySQL clients. See Nautobots, Roll Back! for a preview of this feature.

Moreover, NetBox is not usually the single source of truth. It contains your hardware inventory, the IP addresses, and some topology information. However, this is not the place you put authorized SSH keys, syslog servers, or the BGP configuration. If you also use Ansible, this information ends up in its inventory. The source of truth is therefore fragmented between several tools with different workflows. Since NetBox 2.7, you can append additional data with configuration contexts. This mitigates the point. The data is arranged hierarchically but the hierarchy cannot be customized.2 Nautobot can manage configuration contexts in a Git repository, while still allowing the use of the API to fetch them. You get some additional perks, thanks to Git, but the remaining data is still in a database with a different lifecycle. Lastly, the schema used by NetBox may not fit your needs and you cannot tweak it. For example, you may have a rule to compute the IPv6 address from the IPv4 address for dual-stack interfaces. Such a relationship cannot be easily expressed and enforced in NetBox. When changing the IPv4 address, you may forget to update the IPv6 address. The source of truth should only contain the IPv4 address, but you also want the IPv6 address in NetBox because this is your IPAM and you need it to update your DNS entries.
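For instance, such a derivation can be enforced at generation time. Here is a purely illustrative sketch in Python, assuming a hypothetical convention where the 32-bit IPv4 address is embedded in the low bits of a site IPv6 prefix; neither the prefix nor the rule comes from NetBox or Jerikan:
import ipaddress

def ipv6_for_dual_stack(ipv4, prefix="2001:db8:0:1::/64"):
    """Derive the IPv6 address of a dual-stack interface from its IPv4 address.

    Assumed convention: embed the IPv4 address in the low 32 bits of the
    (hypothetical) site prefix."""
    v4 = ipaddress.IPv4Address(ipv4)
    net = ipaddress.IPv6Network(prefix)
    return ipaddress.IPv6Address(int(net.network_address) + int(v4))

print(ipv6_for_dual_stack("192.0.2.10"))  # 2001:db8:0:1::c000:20a
With such a rule applied when generating configurations, only the IPv4 address needs to live in the source of truth, and the IPv6 address can still be pushed to NetBox as a read-only view.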

Why not Git? There are some limitations when putting your source of truth in Git:
  1. If you want to expose a web interface to allow an external team to request a change, it is more difficult to do it with Git than with a database. Out-of-the-box, NetBox provides a nice web interface and a permission system. You can also write your own web interface and interact with NetBox through its API.
  2. YAML files are more difficult to query in different ways. For example, looking for a free IP address is complex if they are scattered in multiple places.
In my opinion, in most cases, you are better off putting the source of truth in Git instead of NetBox. You get a lot of perks by doing that and you can still use NetBox as a read-only view, usable by other tools. We do that with an Ansible module. In the remaining cases, Git could still fit the bill. Read-only access control can be done through submodules. Pull requests can restrict write access: a bot can check the changes only modify allowed files before auto-merging. This still requires some Git knowledge, but many teams are now comfortable using Git, thanks to its ubiquity.
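As a sketch of the bot-gated write access mentioned above (hypothetical file layout and allowlist, written in Python, not a specific existing bot), a CI job could refuse to auto-merge a pull request that touches anything outside the files delegated to the external team:
import fnmatch
import subprocess

# Hypothetical allowlist: the external team may only touch its own YAML files.
ALLOWED = ["data/peering/*.yaml", "data/customers/*.yaml"]

def changed_files(base="origin/main"):
    """Return the files modified by this branch compared to the base branch."""
    out = subprocess.run(
        ["git", "diff", "--name-only", f"{base}...HEAD"],
        capture_output=True, check=True, text=True,
    )
    return [line for line in out.stdout.splitlines() if line]

def change_is_allowed():
    return all(
        any(fnmatch.fnmatch(path, pattern) for pattern in ALLOWED)
        for path in changed_files()
    )

if __name__ == "__main__":
    print("auto-merge ok" if change_is_allowed() else "manual review required")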

  1. Wikimedia manages its infrastructure with Puppet. They publish everything on GitHub. Creative Commons uses Salt. They also publish everything on GitHub. Thanks to them for doing that! I wish I could provide more real-life examples.
  2. Being able to customize the hierarchy is key to avoiding repetition in the data. For example, if switches are paired together, some data should be attached to them as a group and not duplicated on each of them. Tags can be used to partially work around this issue but you lose the hierarchical aspect.

23 October 2021

Arthur Diniz: Dropdown for GitHub workflows input parameters

Sometimes when we look at the CI/CD tools embedded within git-based software repository managers like GitHub, GitLab, or Bitbucket, we run into missing features. This time my DevOps/SRE team and I were facing the pain of not being able to create drop-downs for GitHub workflow input parameters. Although this functionality is already available on other platforms such as Bitbucket, the specific client we were working with stored their code on GitHub. At first I thought that someone had already solved this problem somehow, but after an extensive search on the internet I found several frustrated GitHub users opening requests within the Support Community and even on Stack Overflow. So I decided to create a solution for this, always aiming for simplicity and a way that makes it easy to get this missing functionality. I started by creating an input array pattern using commas, with a tag (the selector), e.g. brackets, as the default value marker. Here is an example of what an input string would look like:
name: gh-action-dropdown-list-input
on:
  workflow_dispatch:
    inputs:
      environment:
        description: 'Environment'
        required: true
        default: 'dev,staging,[uat],prod'
Now for the final question, which turned out to be the most complicated to deal with: how can I change the GitHub Actions interface to replace the input pattern we created earlier with a dropdown? The simplest answer I could think of was to create a Chrome and Firefox extension that does all this logic behind the scenes, replacing the HTML input element with a select tag containing the array values and leaving the tagged value (the selector) as the default. All the code is written in pure JavaScript, open-source licensed under Apache 2.0, and available at https://github.com/arthurbdiniz/gh-action-dropdown-list-input.
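To illustrate the parsing side of the idea, here is a small sketch in Python rather than the extension's actual JavaScript, with the bracket markers as assumptions mirroring the example above:
def parse_dropdown_input(value, open_tag="[", close_tag="]"):
    """Split a pattern like 'dev,staging,[uat],prod' into options and a default."""
    options, default = [], None
    for raw in value.split(","):
        item = raw.strip()
        if item.startswith(open_tag) and item.endswith(close_tag):
            item = item[len(open_tag):-len(close_tag)]
            default = item
        options.append(item)
    return options, default

options, default = parse_dropdown_input("dev,staging,[uat],prod")
print(options)   # ['dev', 'staging', 'uat', 'prod']
print(default)   # 'uat'
The extension applies the same logic in the browser and renders the options as a select element, keeping the tagged value as the pre-selected default.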

Install extension Once installed, the extension is ready to use and the final result we see is the Actions interface with drop-downs. :)

Configuring selectors Go to the top right corner of the browser you are using and click on the extension logo. A screen will pop up with tag options. Choose the right tags for you and save.
This action might require reloading the GitHub workflow tab.

Have fun using drop-downs inside GitHub. If you liked this project, please share this post and, if possible, star the repository. Also feel free to connect with me on LinkedIn: https://www.linkedin.com/in/arthurbdiniz

References
  • https://github.community/t/can-workflow-dispatch-input-be-option-list/127338
  • https://stackoverflow.com/questions/69296314/dropdown-for-github-workflows-input-parameters

27 September 2021

Wouter Verhelst: SReview::Video is now Media::Convert

SReview, the video review and transcode tool that I originally wrote for FOSDEM 2017 but which has since been used for debconfs and minidebconfs as well, has long had a sizeable component for inspecting media files with ffprobe, and generating ffmpeg command lines to convert media files from one format to another. This component, SReview::Video (plus a number of supporting modules), is really not tied very much to the SReview webinterface or the transcoding backend. That is, the webinterface and the transcoding backend obviously use the ffmpeg handling library, but they don't provide any services that SReview::Video could not live without. It did use the configuration API that I wrote for SReview, but disentangling that turned out to be very easy. As I think SReview::Video is actually an easy to use, flexible API, I decided to refactor it into Media::Convert, and have just uploaded the latter to CPAN itself. The intent is to refactor the SReview webinterface and transcoding backend so that they will also use Media::Convert instead of SReview::Video in the near future -- otherwise I would end up maintaining everything twice, and then what's the point. This hasn't happened yet, but it will soon (this shouldn't be too difficult after all). Unfortunately Media::Convert doesn't currently install cleanly from CPAN, since I made it depend on Alien::ffmpeg which currently doesn't work (I'm in communication with the Alien::ffmpeg maintainer in order to get that resolved), so if you want to try it out you'll have to do a few steps manually. I'll upload it to Debian soon, too.
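To give an idea of the kind of job such a library does, here is a conceptual sketch in Python (not Media::Convert's Perl API): probe a file with ffprobe and build, without running, an ffmpeg command line for a conversion; the codec choices are arbitrary examples:
import json
import subprocess

def probe(path):
    """Inspect a media file with ffprobe and return its metadata as a dict."""
    out = subprocess.run(
        ["ffprobe", "-v", "quiet", "-print_format", "json",
         "-show_format", "-show_streams", path],
        capture_output=True, check=True, text=True,
    )
    return json.loads(out.stdout)

def convert_command(src, dst, video_codec="libx264", audio_codec="aac"):
    """Build (but do not run) an ffmpeg command line converting src to dst."""
    return ["ffmpeg", "-i", src, "-c:v", video_codec, "-c:a", audio_codec, dst]

info = probe("talk.webm")
print(info["format"]["duration"])
print(convert_command("talk.webm", "talk.mp4"))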

15 September 2021

Ian Jackson: Get source to Debian packages only via dgit; "official" git links are beartraps

tl;dr dgit clone sourcepackage gets you the source code, as a git tree, in ./sourcepackage. cd into it and dpkg-buildpackage -uc -b. Do not use: "VCS" links on official Debian web pages like tracker.debian.org; "debcheckout"; searching Debian's gitlab (salsa.debian.org). These are good for Debian experts only. If you use Debian's "official" source git repo links you can easily build a package without Debian's patches applied.[1] This can even mean missing security patches. Or maybe it can't even be built in a normal way (or at all). OMG WTF BBQ, why? It's complicated. There is History. Debian's "most-official" centralised source repository is still the Debian Archive, which is a system based on tarballs and patches. I invented the Debian source package format in 1992/3 and it has been souped up since, but it's still tarballs and patches. This system is, of course, obsolete, now that we have modern version control systems, especially git. Maintainers of Debian packages have invented ways of using git anyway, of course. But this is not standardised. There is a bewildering array of approaches. The most common approach is to maintain a git tree containing a pile of *.patch files, which are then often maintained using quilt. Yes, really, many Debian people are still using quilt, despite having git! There is machinery for converting this git tree containing a series of patches, to an "official" source package. If you don't use that machinery, and just build from git, nothing applies the patches. [1] This post was prompted by a conversation with a friend who had wanted to build a Debian package, and didn't know to use dgit. They had got the source from salsa via a link on tracker.d.o, and built .debs without Debian's patches. This is not a theoretical unsoundness, but a very real practical risk. Future is not very bright In 2013 at the Debconf in Vaumarcus, Joey Hess, myself, and others, came up with a plan to try to improve this which we thought would be deployable. (Previous attempts had failed.) Crucially, this transition plan does not force change onto any of Debian's many packaging teams, nor onto people doing cross-package maintenance work. I worked on this for quite a while, and at a technical level it is a resounding success. Unfortunately there is a big limitation. At the current stage of the transition, to work at its best, this replacement scheme hopes that maintainers who update a package will use a new upload tool. The new tool fits into their existing Debian git packaging workflow and has some benefits, but it does make things more complicated rather than less (like any transition plan must, during the transitional phase). When maintainers don't use this new tool, the standardised git branch seen by users is a compatibility stub generated from the tarballs-and-patches. So it has the right contents, but useless history. The next step is to allow a maintainer to update a package without dealing with tarballs-and-patches at all. This would be massively more convenient for the maintainer, so an easy sell. And of course it links the tarballs-and-patches to the git history in a proper machine-readable way. We held a "git packaging requirements-gathering session" at the Curitiba Debconf in 2019. I think the DPL's intent was to try to get input into the git workflow design problem. The session was a great success: my existing design was able to meet nearly everyone's needs and wants. The room was obviously keen to see progress. The next stage was to deploy tag2upload.
I spoke to various key people at the Debconf and afterwards in 2019 and the code has been basically ready since then. Unfortunately, deployment of tag2upload is mired in politics. It was blocked by a key team because of unfounded security concerns; positive opinions from independent security experts within Debian were disregarded. Of course it is always hard to get a team to agree to something when it's part of a transition plan which treats their systems as an obsolete setup retained for compatibility. Current status If you don't know about Debian's git packaging practices (eg, you have no idea what "patches-unapplied packaging branch without .pc directory" means), and don't want to learn about them, you must use dgit to obtain the source of Debian packages. There is a lot more information and detailed instructions in dgit-user(7). Hopefully either the maintainer did the best thing, or, if they didn't, you won't need to inspect the history. If you are a Debian maintainer, you should use dgit push-source to do your uploads. This will make sure that users of dgit will see a reasonable git history.
edited 2021-09-15 14:48 Z to fix a typo



9 September 2021

Bits from Debian: DebConf21 online closes

DebConf21 group photo On Saturday 28 August 2021, the annual Debian Developers and Contributors Conference came to a close. DebConf21 has been held online for the second time, due to the coronavirus (COVID-19) disease pandemic. All of the sessions have been streamed, with a variety of ways of participating: via IRC messaging, online collaborative text documents, and video conferencing meeting rooms. With 740 registered attendees from more than 15 different countries and a total of over 70 event talks, discussion sessions, Birds of a Feather (BoF) gatherings and other activities, DebConf21 was a large success. The setup made for previous online events, involving Jitsi, OBS, Voctomix, SReview, nginx, Etherpad, and a web-based frontend for Voctomix, has been improved and used successfully for DebConf21. All components of the video infrastructure are free software, and configured through the Video Team's public ansible repository. The DebConf21 schedule included a wide variety of events, grouped in several tracks. The talks have been streamed using two rooms, and several of these activities have been held in different languages: Telugu, Portuguese, Malayalam, Kannada, Hindi, Marathi and English, allowing a more diverse audience to enjoy and participate. Between talks, the video stream has been showing the usual sponsors in a loop, but also some additional clips including photos from previous DebConfs, fun facts about Debian and short shout-out videos sent by attendees to communicate with their Debian friends. The Debian publicity team did the usual live coverage to encourage participation with micronews announcing the different events. The DebConf team also provided several mobile options to follow the schedule. For those who were not able to participate, most of the talks and sessions are already available through the Debian meetings archive website, and the remaining ones will appear in the following days. The DebConf21 website will remain active for archival purposes and will continue to offer links to the presentations and videos of talks and events. Next year, DebConf22 is planned to be held in Prizren, Kosovo, in July 2022. DebConf is committed to a safe and welcoming environment for all participants. During the conference, several teams (Front Desk, Welcome team and Community team) have been available to help participants get the best experience at the conference and find solutions to any issues that arose. See the web page about the Code of Conduct on the DebConf21 website for more details. Debian thanks the commitment of numerous sponsors to support DebConf21, particularly our Platinum Sponsors: Lenovo, Infomaniak, Roche, Amazon Web Services (AWS) and Google. About Debian The Debian Project was founded in 1993 by Ian Murdock to be a truly free community project. Since then the project has grown to be one of the largest and most influential open source projects. Thousands of volunteers from all over the world work together to create and maintain Debian software. Available in 70 languages, and supporting a huge range of computer types, Debian calls itself the universal operating system. About DebConf DebConf is the Debian Project's developer conference. In addition to a full schedule of technical, social and policy talks, DebConf provides an opportunity for developers, contributors and other interested people to meet in person and work together more closely.
It has taken place annually since 2000 in locations as varied as Scotland, Argentina, and Bosnia and Herzegovina. More information about DebConf is available from https://debconf.org/. About Lenovo As a global technology leader manufacturing a wide portfolio of connected products, including smartphones, tablets, PCs and workstations as well as AR/VR devices, smart home/office and data center solutions, Lenovo understands how critical open systems and platforms are to a connected world. About Infomaniak Infomaniak is Switzerland's largest web-hosting company, also offering backup and storage services, solutions for event organizers, live-streaming and video on demand services. It wholly owns its datacenters and all elements critical to the functioning of the services and products provided by the company (both software and hardware). About Roche Roche is a major international pharmaceutical provider and research company dedicated to personalized healthcare. More than 100,000 employees worldwide work towards solving some of the greatest challenges for humanity using science and technology. Roche is strongly involved in publicly funded collaborative research projects with other industrial and academic partners and has supported DebConf since 2017. About Amazon Web Services (AWS) Amazon Web Services (AWS) is one of the world's most comprehensive and broadly adopted cloud platforms, offering over 175 fully featured services from data centers globally (in 77 Availability Zones within 24 geographic regions). AWS customers include the fastest-growing startups, largest enterprises and leading government agencies. About Google Google is one of the largest technology companies in the world, providing a wide range of Internet-related services and products such as online advertising technologies, search, cloud computing, software, and hardware. Google has been supporting Debian by sponsoring DebConf for more than ten years, and is also a Debian partner sponsoring parts of Salsa's continuous integration infrastructure within Google Cloud Platform. Contact Information For further information, please visit the DebConf21 web page at https://debconf21.debconf.org/ or send mail to press@debian.org.

10 August 2021

Shirish Agarwal: BBI, IP report, State Borders and Civil Aviation I

"If I have seen further, it is by standing on the shoulders of Giants" (Isaac Newton, 1675), although it should be credited to the 12th-century Bernard of Chartres. You will know why I have shared this, probably at the beginning of Civil Aviation history itself.

Comments on the BBI court case which happened in Kenya, and the subsequent appeal. I am not going to share much about the coverage of the BBI appeal as Gautam Bhatia has quite eloquently shared his observations, both on the initial case and the subsequent appeal, which lasted 5 days in Kenya and was shown all around the world thanks to YouTube. One of the interesting points which stuck with me was that in Kenya, sign language is one of the official languages. And in fact, I was able to read quite a bit about the various sign languages which are there in Kenya. It just boggles the mind that there are countries that give importance to such things even though they are not as rich or as developed as what we call developed economies. I probably might give it more space and more depth as it does carry some important judicial jurisprudence which is and will be felt around the world. How India reacts or doesn't is probably another matter altogether. But yes, it needs its own space, maybe after some more time. Report on the Standing Committee on IP Regulation in India and the false promises. Again, I do not want to take much time in sharing details about what the report contains, as the report can be found here. I have uploaded it on WordPress, in case of an issue. An observation on the same subject can be found here. At least to me, and probably to those who have been following the IP space, either using or working on free software or on IP itself, it would be clear that the issues shared have been known since 1994. And it does benefit the industry rather than the country. This way, the rent-seekers and monopolists win. There is ample literature that shares how rich countries had weak regulation for decades and even centuries till it was advantageous for them to have strong IP. One can look at the history of Europe and the United States for it. We can also look at the history of our neighbor China, which for the last 5 decades has used some provisions of IP and disregarded many others. But these words are of no use, as the policies made and shared are by the rich, for the rich.

Fighting between two State Borders Ironically, or because of it, two BJP-ruled states, Assam and Mizoram, fought between themselves, in which 6 policemen died. While the history of the two states is complicated, it becomes a bit more complicated when one goes back into Assam and ULFA history and comes to know that ULFA could not have become that powerful unless the Marwaris, people of my clan, had given generous donations to them. They thought it was a good investment, which later would turn out to be untrue. Those who think ULFA has declined, or whatever, still don't have answers to this or this. Interestingly, both the Chief Ministers approached the Home Minister (Mr. Amit Shah) of the BJP. Mr. Shah was supposed to be the Chanakya, but in many instances, including this one, he decided to stay away. His statement was along the lines of: you guys figure it out yourselves. There is a poem that was shared by the late poet Rahat Indori. I am sharing the same below as an image and will attempt to put a rough translation.
kisi ke baap ka hindustan todi hain Rahat Indori
Poets, whether in India or elsewhere, are known to speak truth to power and are a bit of a rebel. This poem by Rahat Indori is provocatively titled "Kisi ke baap ka Hindustan todi hai". It challenges the majoritarian idea that Hindustan/India belongs only to the majoritarian religion. He also challenges, and at the same time asserts, that every Indian citizen, regardless of whatever his or her religion might be, is an Indian and can claim India as home. While the whole poem is compelling in itself, for me what hits home is in the second stanza

Lagegi Aag to aayege ghat kayi zad me, Yaha pe sirf hamara makan todi hai. The meaning is simple yet subtle: he uses Aag, or fire, as a symbol of hate, sharing that if hate spreads, it won't be his home alone that will be torched. If one wants to literally understand what he meant, I present to you the cult Russian movie No Escapes, or Ogon as it is known in Russian. If one were to decipher why the Russian film doesn't talk about climate change, one has to view it from the prism of what their leader Vladimir Putin has said and done over the years. As can be seen even there, the situation is far more complex than one imagines. Although, it is interesting to note that he decried climate change as man-made till as late as last year and was on the side of Trump throughout his presidency. This was in 2017 as well as perhaps this. Interestingly, there was a change in tenor and note just a couple of weeks back, but that could be only politicking or much more. Statements that are not backed by legislation and application are usually just a whitewash. We would have to wait to see what concrete steps are taken by Putin, the Kremlin, and their Duma before saying either way.

Civil Aviation and the broad structure Civil Aviation is a large topic and I would not be able to do justice to it all in one article/blog post. So, for example, I will not be getting into aircraft (Boeing, Airbus, Comac, etc.) or the new electric aircraft, as that would just make the blog post long. I will also not be talking about cargo or visas or many such topics, as all of them actually would and do need their own space. So this will be much more limited to airports and, to some extent, airlines, as one cannot survive without the other. The primary reason for doing this is that there is and has been a lot of myth-making in India about Civil Aviation in general, whether it has to do with Civil Aviation history or whatever passes for policy in India.

A little early history Man has always looked at the stars and envisaged himself or herself as a bird, flying with gay abandon. In fact, there have been many paintings and sculptures that imagined how we would fly. The Steam Engine itself was invented in 82 BCE. But an attempt to fly was made by a certain monk called Brother Elmer of Malmesbury, who attempted the same in 1010, shortly after the birth of the rudimentary steam engine. The most famous of all would be Leonardo da Vinci for his amazing sketches of flying machines in 1493. Cyrano de Bergerac apparently wrote two books, both sadly published after his death. Interestingly, you can find both the books and the gentleman in the Project Gutenberg archives. How much of M/s Cyrano's exploits were his own and how much was embellished by M/s Curtis, maybe a friend, a lover, who knows; but it does give the air of the swashbuckling adventurer which many men aspired to in that time. So, why not an author?

L'Autre Monde: ou les États et Empires de la Lune (Comical History of the States and Empires of the Moon) and Les États et Empires du Soleil (The States and Empires of the Sun). These two French books apparently had a lot of references to flying machines. Both of them were authored by Cyrano de Bergerac, and both were sadly published after his death, one apparently in 1656 and the other one a couple of years later. By the 17th century, while it had become easy to know and measure the latitude, measuring longitude was a problem. In fact, it can be argued, and probably successfully, that India wouldn't have been under British rule, or the UK wouldn't have been a naval superpower, if it hadn't solved the longitude problem. Over the years, the British Royal Navy suffered many blows; one of the most famous or infamous among them might be the Scilly naval disaster of 1707, which led to the death of some 2,000 British Royal Navy personnel and led Queen Anne, who was ruling over England at that time, to pass via Parliament the Longitude Act, which basically was an open competition for anybody to fix the problem and carried prize money of £20,000. While nobody could claim the whole prize, many did get smaller amounts depending upon the achievements. The best, and the nearest who came, was John Harrison, who made the first sea watch; with modifications over the years it became miniaturized to a pocket-sized marine chronometer, although I doubt the ones used today look anything like the ones in those days. But if that had not been invented, we surely would have been freed long ago. The assumption being that the East India Company would have dashed onto rocks so many times, that the whole exercise would have been futile. The downside of it is that the maritime trade routes that are being used today, and the commerce, would not have been. Neither would aircraft or space, for that matter, or at the very least they would have been delayed by how many years or decades, nobody knows. If one wants to read about the longitude problem, one can get the famous book Longitude.

In many mythologies, including Indian and Arabian tales, we had the flying carpet, which would let its passengers go from one place to the next. Then there is also mention of the Pushpak Vimana in ancient texts, but those secrets remain secrets. Think how much foreign exchange India could make by both using it and exporting the same worldwide. And I'm being serious. There are many who believe in it, but sadly, the ones who know the secret don't seem to want India's progress. Just think of the carbon credits that India could have, which itself would make India a superpower. And I'm being serious.

Western Ideas and Implementation. Even in the late and early 18th century, there were many machines that were designed to have controlled flight, but it was only the Wright Flyer that was able to demonstrate a controlled flight in 1903. The ones who came pretty close to what the Wrights achieved were people by the names of Cayley and Langley. They actually studied what the pioneers had done. They looked at what Otto Lilienthal had done, as he had done a lot of hang-gliding and put a lot of literature into the public domain then.

Furthermore, they also consulted Octave Chanute. The whole system and history of the same are a bit complicated, but it does give a window into what happened then. So, it won't be wrong to say that whatever the Wright Brothers accomplished would probably not have been possible, or would have taken years or maybe even decades, if that literature and those experiments, drawings, etc. in the commons were not available. So, while they did experimentation, they also looked at what other people were doing and had done, which was in the public domain/commons.

They also did a lot of testing, which gave them new insights. Even the propulsion system they used in the 1903 flight was a design by Nicolaus Otto. In fact, the aircraft would not have been born if the Chinese had not invented kites in the early sixth century A.D. One also has to credit Isaac Newton because of the three laws of motion, again without which none of the above could have happened. What is credited to the Wright brothers is not just that they made the Kitty Hawk Flyer; they also made it commercial, as they sold it and variations of the design to the American Army and also set up a pilot school where pilots were trained for warfighting. 119 odd pilots came out of that school. The Wrights thought that air supremacy would end the war early, but this turned out to be a false hope.

Competition and Those Magnificent Men and their flying machines One of the first competitions to unlock creativity was the English Channel crossing prize offered by the Daily Mail. This was successfully done by the Frenchman Louis Blériot. You can read his account here. There were quite a few competitions before World War 1 broke out. There is a beautiful, humorous movie that dedicates itself to imagining how things would have gone in that time. In fact, there have been two movies; this one and an earlier movie called Sky Riders made many a youth dream. The other movie sadly is not yet in the public domain, and when it will be nobody knows, but if you see it or even read it, it gives you goosebumps.

World War 1 and Improvements to Aircraft World War 1 is remembered as the Great War or, in an attempt at irony, the War to end all wars. It caused a lot of destruction of both people and property and, in fact, laid the foundation of World War 2. At the same time, if World War 1 hadn't happened, then airpower and plane technology would have taken decades longer. Even medicine and medical techniques became revolutionary due to World War 1. In order to be brief, I am not sharing much about World War 1, otherwise that itself would become its own blog post. And while it had its heroes and villains, the who, when, and why could be tackled perhaps another time.

The Guggenheim Family and the birth of Civil Aviation If one has to credit one family for the birth of Civil Aviation, it has to be the Guggenheim family. Again, I would not like to dwell much, as much of their contribution has already been noted here. There are quite a few things still that need to be said and pointed out. First and foremost is the fact that they made lessons about flying part of the syllabus from grade school to college and beyond, whereas in the Indian schooling system there is nothing like that to date. Here in India, even in engineering courses, you don't have much info. Unless and until you go for professional aviation or aeronautical courses, and most of those courses cost a bomb, so only the very rich or the very determined (with loans) go for them, at least that's what my friends have shared. And there is no guarantee you will get a job after that, especially in today's climate. Even their funds, grants, and prizes were given to various people so that improvements could be made to United States Civil Aviation. This, as shared in the report/blog post linked above, was in response to what the younger child/brother saw as Europe having a large advantage in both Military and Civil Aviation. They also made several grants to several universities, which would not only do notable work during their lifetime but carry on the legacy, researching different aspects of aircraft. One point that should be noted is that Europe was far ahead of the U.S. even then, which is what prompted the younger son. There had already been talks of civil/civilian flights on European routes, although much different from what either of us can imagine today. Even with everything that the U.S. had going for her, and still has, Europe is the one which has better airports, better facilities, better everything than the U.S. has even today. If you look at the lists of airports for better value for money or facilities, you would find many airports from Europe, some from Asia, and only a few from the U.S., even though they are some of the most frequent users of the service. But that debate and those arguments I will have to leave for perhaps the next blog post, as there is still a lot to be covered between the 1930s, 1950s, and today. The Guggenheim archives do a fantastic job of sharing part of the story till the 1950s, but there is also quite a bit that they don't. I will probably start from that in the next blog post and then carry on ahead. Lastly, before I wind up, I have to share why I felt the need to write, capture, and share this part of Aviation history. The plain and simple reason is that many of the people I meet, either on the web, on Twitter, or even in real life, are just unaware of how this whole thing came about. The unawareness in my fellow brothers and sisters is just shocking, overwhelming. By sharing these articles, I would at least be able to guide them, or at least let them know how it all came to be and where things are going, and not just be so clueless. Till later.

29 June 2021

Louis-Philippe Véronneau: New Winter Bicycle

Although I'm about 5 months early and we're in the middle of a terrible heat wave where I live, I've just finished building my new winter bicycle! I've been riding all year round for about 8 years now and the winters in Montreal are harsh enough that a second winter-ready bicycle is required to have a fun and safe cold season. I'm saying "new bicycle" because a few months ago I totaled the frame of my previous winter bike. As you can see in the picture below (please disregard the salt crust), I broke the seat tube at the lug. I didn't notice it at first and while riding I kept hearing a "bang bang" sound coming from the frame when going over bumps. To my horror, I eventually realised the sound was coming from the seat tube hitting the bottom bracket shell! A large crack in my old bike frame I'm sad to see this Lejeune French frame go, but it was probably built in the 70's or the early 80's: it had a good life. Another great example of how a high quality steel frame can last a lifetime. Sparing no expense, I decided to replace it by a brand new Surly Cross-Check frameset. Surly is known for making great bike frames and the Cross-Check is a very versatile model. I'll finally be able to fit my Schwalbe Marathon Winter Plus tires at the front and the back with full length mud guards!!! Hard to ask for more. A picture of my new winter bike with CX Pro tires Summer has just started and I'm already looking forward to winter :)

Arturo Borrero González: Last couple of talks

In the last few months I presented several talks. Topics ranged from a round table on free software, to sharing some of my work as an SRE in the Cloud Services team at the Wikimedia Foundation. For some of them the videos are now published, so I would like to write a reference here, mostly as a way to collect such events for my own record. Isn't that what a blog is all about, after all? Before you continue reading, let me mention that the two talks I'll reference were given in my native Spanish. The videos are hosted on YouTube and autogenerated subtitles should be available, with doubtful quality though. Also, there was at least one additional private talk that I'm not allowed to comment on here today. I was invited to participate in a Docker community event called Kroquecon, which was aimed at pushing the Spanish-speaking Kubernetes community around the world. The event name is a word play on "Kubernetes", "conference" and "croqueta", a typical Spanish food. The talk happened on 2021-04-29, and I was part of a round table about free software, communities and how to join and participate in such projects. I commented on my experience in the Debian project and Netfilter, and my several years in Google Summer of Code (3 as student, 2 as mentor). Video of the event:

The other event was the CNCF-supported Kubernetes Community Days Spain (KCD Spain). During Kroquecon I was encouraged to submit a talk proposal for this event, to talk about something related to our use of Kubernetes in the Wikimedia Cloud Services team at the Wikimedia Foundation. The proposal was originally rejected. Then, a couple of weeks before the event itself, I was contacted by the organizers with a green light to give it because the other speaker couldn't make it. My coworker David Caro joined me in the presentation. It was titled "Conoce Wikimedia Toolforge, plataforma basada en Kubernetes" (or "Meet Wikimedia Toolforge, a Kubernetes-based platform"). We explained what Wikimedia Cloud Services is, focusing on Toolforge, and in particular how we use Kubernetes to enable the platform's most interesting use cases. We covered several interesting topics, including how we handle multi-tenancy, or the problems we had with the etcd & ceph combo. The slides we used are available. Video of the event:

EOF

14 June 2021

Sergio Durigan Junior: I am not on Freenode anymore

This is a quick public announcement to say that I am not on the Freenode IRC network anymore. My nickname (sergiodj), which was more than a decade old, has just been deleted (along with every other nickname and channel in that network) from their database today, 2021-06-14. For your safety, you should assume that everybody you knew at Freenode is not there either, even if you see their nicknames online. Do not trust without verifying. In fact, I would strongly encourage that you do not join Freenode anymore: their new policies are absolutely questionable and their disregard for their users is blatant. If you would like to chat with me, you can find me at OFTC (preferred) and Libera.

27 May 2021

Wouter Verhelst: SReview and pandemics

The pandemic was a bit of a mess for most FLOSS conferences. The two conferences that I help organize -- FOSDEM and DebConf -- are no exception. In both conferences, I do essentially the same work: as a member of both video teams, I manage the postprocessing of the video recordings of all the talks that happened at the respective conference(s). I do this by way of SReview, the online video review and transcode system that I wrote, which essentially crowdsources the manual work that needs to be done, and automates as much as possible of the workflow. The original version of SReview consisted of a database, a (very basic) Mojolicious-based webinterface, and a bunch of perl scripts which would build and execute ffmpeg command lines using string interpolation. As a quick hack that I needed to get working while writing it in my spare time in half a year, that approach was workable and resulted in successful postprocessing after FOSDEM 2017, and a significant improvement in time from the previous years. However, I did not end development with that, and since then I've replaced the string interpolation by an object oriented API for generating ffmpeg command lines, as well as modularized the webinterface. Additionally, I've had help reworking the user interface into a system that is somewhat easier to use than my original interface, and have slowly but surely added more features to the system so as to make it more flexible, as well as support more types of environments for the system to run in. One of the major issues that still remains with SReview is that the administrator's interface is pretty terrible. I had been planning on revamping that for 2020, but then massive amounts of people got sick, travel was banned, and both the conferences that I work on were converted to online-only conferences. These have some very specific requirements; e.g., both conferences allowed people to upload a prerecorded version of their talk, rather than doing the talk live; since preprocessing a video is, technically, very similar to postprocessing it, I adapted SReview to allow people to upload a video file that it would then validate (in terms of length, codec, and apparent resolution). This seems easy to do, but I decided to implement this functionality so that it would also allow future use for in-person conferences, where occasionally a speaker requests that modifications be made to the video file in a way that SReview is unable to do. This made it marginally more involved, but at least will mean that a feature which I had planned to implement some years down the line is now already implemented. The new feature works quite well, and I'm happy I've implemented it in the way that I have. In order for the "upload" processing and the "post-event" processing to not be confused, however, I decided to import the conference schedules twice: once as the conference itself, and once as a shadow version of that conference for the prerecordings. That way, I could track the progress through the system of the prerecording completely separately from the progress of the postprocessing of the video (which adds opening/closing credits, and transcodes to multiple variants of the same video). Schedule parsing was something that had not been implemented in a generic way yet, however; since that made doubling the schedule in that way rather complex, I decided to bite the bullet and (finally) implement schedule parsing in a generic way.
Currently, schedule parsers exist for two formats (Pentabarf XML and the Wafer variant of that same format which is almost, but not quite, entirely the same). The API for that is quite flexible, and I'm happy with the way things have been implemented there. I've also implemented a set of "virtual" parsers, which allow mangling the schedule in various ways (by either filtering out talks that we don't want, or by generating the shadow version of the schedule that I talked about earlier). While the SReview settings have reasonable defaults, occasionally the output of SReview is not entirely acceptable, due to more complicated matters that then result in encoding artifacts. As a result, the DebConf video team has been doing a final review step, completely outside of SReview, to ensure that such encoding artifacts don't exist. That seemed suboptimal, so recently I've been working on integrating that into SReview as well. First tests have been run, and seem to be acceptable, but there's still a few loose ends to be finalized. As part of this, I've also reworked the way comments could be entered into the system. Previously the presence of a comment would signal that the video has some problems that an administrator needed to look at. Unfortunately, that was causing some confusion, with some people even thinking it's a good place to enter a "thank you for your work" style of comment... which it obviously isn't. Turning it into a "comment log" system instead fixes that, and also allows for better two-way communication between administrators and reviewers. Hopefully that'll improve things in that area as well. Finally, the audio normalization in SReview -- for which I've long used bs1770gain -- is having problems. First of all, bs1770gain will sometimes alter the timing of the video or audio file that it's passed, which is very problematic if I want to process it further. There is an ffmpeg loudnorm filter which implements the same algorithm, so that should make things easier to use. Secondly, the author of bs1770gain is a strange character that I'd rather not be involved with. Before I knew about the loudnorm filter I didn't really have a choice, but now I can just rip bs1770gain out and replace it by the loudnorm filter. That will fix various other bugs in SReview, too, because SReview relies on behaviour that isn't actually there (but which I didn't know at the time when I wrote it). All in all, the past year-and-a-bit has seen a lot of development for SReview, with multiple features being added and a number of long-standing problems being fixed. Now if only the pandemic would subside, allowing the whole "let's do everything online only" wave to cool down a bit, so that I can finally make time to implement the admin interface...
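For reference, the loudnorm filter mentioned above can be driven from a small wrapper; the following is an illustration of the ffmpeg invocation in Python, not SReview's own code, and the target values are example figures:
import subprocess

def normalize_audio(src, dst, integrated=-23.0, true_peak=-1.0, lra=7.0):
    """Run ffmpeg with the loudnorm audio filter, copying the video stream."""
    loudnorm = f"loudnorm=I={integrated}:TP={true_peak}:LRA={lra}"
    cmd = ["ffmpeg", "-y", "-i", src, "-af", loudnorm, "-c:v", "copy", dst]
    subprocess.run(cmd, check=True)

normalize_audio("talk-raw.mkv", "talk-normalized.mkv")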

25 May 2021

Vincent Bernat: Jerikan+Ansible: a configuration management system for network

There are many resources for network automation with Ansible. Most of them only expose the first steps or limit themselves to a narrow scope. They give no clue on how to expand from that. Real network environments may be large, versatile, heterogeneous, and filled with exceptions. The lack of real-world examples for Ansible deployments, unlike Puppet and SaltStack, leads many teams to build brittle and incomplete automation solutions. We have released our attempt to tackle this problem under an open-source license. Here is a quick demo to configure a new peering:
This work is the collective effort of Cédric Hascoët, Jean-Christophe Legatte, Loïc Pailhas, Sébastien Hurtel, Tchadel Icard, and Vincent Bernat. We are the network team of Blade, a French company operating Shadow, a cloud-computing product. In May 2021, our company was bought by Octave Klaba and the infrastructure is being transferred to OVHcloud, saving Shadow as a product, but making our team redundant. Our network was around 800 devices, spanning over 10 datacenters with more than 2.5 Tbps of available egress bandwidth. The released material is therefore a substantial example of managing a medium-scale network using Ansible. We have left out the handling of our legacy datacenters to make the final result more readable while keeping enough material to not turn it into a trivial example.

Jerikan The first component is Jerikan. As input, it takes a list of devices, configuration data, templates, and validation scripts. It generates a set of configuration files for each device. Ansible could cover this task, but it has the following limitations:
  • it is slow;
  • errors are difficult to debug;1 and
  • the hierarchy to look up a variable is rigid.
Jerikan inputs and outputs
If you want to follow the examples, you only need to have Docker and Docker Compose installed. Clone the repository and you are ready!

Source of truth We use YAML files, versioned with Git, as the single source of truth instead of using a database, like NetBox, or a mix of a database and text files. This provides many advantages:
  • anyone can use their preferred text editor;
  • the team prepares changes in branches;
  • the team reviews changes using merge requests;
  • the merge requests expose the changes to the generated configuration files;
  • rollback to a previous state is easy; and
  • it is fast.
The first file is devices.yaml. It contains the device list. The second file is classifier.yaml. It defines a scope for each device. A scope is a set of keys and values. It is used in templates and to look up data associated with a device.
$ ./run-jerikan scope to1-p1.sk1.blade-group.net
continent: apac
environment: prod
groups:
- tor
- tor-bgp
- tor-bgp-compute
host: to1-p1.sk1
location: sk1
member: '1'
model: dell-s4048
os: cumulus
pod: '1'
shorthost: to1-p1
The device name is matched against a list of regular expressions and the scope is extended by the result of each match. For to1-p1.sk1.blade-group.net, the following subset of classifier.yaml defines its scope:
matchers:
  - '^(([^.]*)\..*)\.blade-group\.net':
      environment: prod
      host: '\1'
      shorthost: '\2'
  - '\.(sk1)\.':
      location: '\1'
      continent: apac
  - '^to([12])-[as]?p(\d+)\.':
      member: '\1'
      pod: '\2'
  - '^to[12]-p\d+\.':
      groups:
        - tor
        - tor-bgp
        - tor-bgp-compute
  - '^to[12]-(p|ap)\d+\.sk1\.':
      os: cumulus
      model: dell-s4048
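To make the mechanism more concrete, here is a rough sketch in Python of how such matchers could extend a scope; this is my own simplified illustration, not Jerikan's actual code, and it assumes list values (like groups) are appended while scalar values may use backreferences:
import re

def build_scope(device, matchers):
    """Extend the scope with the attributes of every matching regular expression."""
    scope = {}
    for matcher in matchers:
        for pattern, attributes in matcher.items():
            match = re.search(pattern, device)
            if not match:
                continue
            for key, value in attributes.items():
                if isinstance(value, list):
                    scope.setdefault(key, []).extend(value)
                else:
                    scope[key] = match.expand(value)
    return scope

matchers = [
    {r'^(([^.]*)\..*)\.blade-group\.net': {'environment': 'prod', 'host': r'\1', 'shorthost': r'\2'}},
    {r'\.(sk1)\.': {'location': r'\1', 'continent': 'apac'}},
]
print(build_scope("to1-p1.sk1.blade-group.net", matchers))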
The third file is searchpaths.py. It describes which directories to search for a variable. A Python function provides a list of paths to look up in data/ for a given scope. Here is a simplified version:2
def searchpaths(scope):
    paths = [
        "host/ scope[location] / scope[shorthost] ",
        "location/ scope[location] ",
        "os/ scope[os] - scope[model] ",
        "os/ scope[os] ",
        'common'
    ]
    for idx in range(len(paths)):
        try:
            paths[idx] = paths[idx].format(scope=scope)
        except KeyError:
            paths[idx] = None
    return [path for path in paths if path]
With this definition, the data for to1-p1.sk1.blade-group.net is looked up in the following paths:
$ ./run-jerikan scope to1-p1.sk1.blade-group.net
[ ]
Search paths:
  host/sk1/to1-p1
  location/sk1
  os/cumulus-dell-s4048
  os/cumulus
  common
Variables are scoped using a namespace that should be specified when doing a lookup. We use the following ones:
  • system for accounts, DNS, syslog servers,
  • topology for ports, interfaces, IP addresses, subnets,
  • bgp for BGP configuration
  • build for templates and validation scripts
  • apps for application variables
When looking up a variable in a given namespace, Jerikan looks for a YAML file named after the namespace in each directory in the search paths. For example, if we look up a variable for to1-p1.sk1.blade-group.net in the bgp namespace, the following YAML files are processed: host/sk1/to1-p1/bgp.yaml, location/sk1/bgp.yaml, os/cumulus-dell-s4048/bgp.yaml, os/cumulus/bgp.yaml, and common/bgp.yaml. The search stops at the first match. The schema.yaml file allows us to override this behavior by asking to merge dictionaries and arrays across all matching files. Here is an excerpt of this file for the system namespace:
system:
  users:
    merge: hash
  sampling:
    merge: hash
  ansible-vars:
    merge: hash
  netbox:
    merge: hash
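As an illustration of this lookup behavior, here is a simplified sketch in Python under my own assumptions (flat YAML files, no caching), not Jerikan's implementation; it returns the first match, or merges dictionaries across all matching files when the schema asks for it:
import pathlib
import yaml  # PyYAML

def lookup(namespace, key, search_paths, data_root="data", merge=None):
    """Return key from the first matching <path>/<namespace>.yaml, or merge
    dictionaries across all matching files when merge == "hash"."""
    merged = {}
    for path in search_paths:
        candidate = pathlib.Path(data_root) / path / f"{namespace}.yaml"
        if not candidate.exists():
            continue
        content = yaml.safe_load(candidate.read_text()) or {}
        if key not in content:
            continue
        if merge == "hash":
            # Earlier (more specific) paths win over later (more generic) ones.
            merged = {**content[key], **merged}
        else:
            return content[key]  # first match wins
    return merged or None

paths = ["host/sk1/to1-p1", "location/sk1", "os/cumulus-dell-s4048", "os/cumulus", "common"]
print(lookup("system", "netbox", paths, merge="hash"))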
The last feature of the source of truth is the ability to use Jinja2 templates for keys and values by prefixing them with ~ :
# In data/os/junos/system.yaml
netbox:
  manufacturer: Juniper
  model: "~  model upper  "
# In data/groups/tor-bgp-compute/system.yaml
netbox:
  role: net_tor_gpu_switch
Looking up netbox in the system namespace for to1-p2.ussfo03.blade-group.net yields the following result:
$ ./run-jerikan scope to1-p2.ussfo03.blade-group.net
continent: us
environment: prod
groups:
- tor
- tor-bgp
- tor-bgp-compute
host: to1-p2.ussfo03
location: ussfo03
member: '1'
model: qfx5110-48s
os: junos
pod: '2'
shorthost: to1-p2
[ ]
Search paths:
[ ]
  groups/tor-bgp-compute
[ ]
  os/junos
  common
$ ./run-jerikan lookup to1-p2.ussfo03.blade-group.net system netbox
manufacturer: Juniper
model: QFX5110-48S
role: net_tor_gpu_switch
This also works for structured data:
# In groups/adm-gateway/topology.yaml
interface-rescue:
  address: "~  lookup('topology', 'addresses').rescue  "
  up:
    - "~ip route add default via   lookup('topology', 'addresses').rescue ipaddr('first_usable')   table rescue"
    - "~ip rule add from   lookup('topology', 'addresses').rescue ipaddr('address')   table rescue priority 10"
# In groups/adm-gateway-sk1/topology.yaml
interfaces:
  ens1f0: "~  lookup('topology', 'interface-rescue')  "
This yields the following result:
$ ./run-jerikan lookup gateway1.sk1.blade-group.net topology interfaces
[ ]
ens1f0:
  address: 121.78.242.10/29
  up:
  - ip route add default via 121.78.242.9 table rescue
  - ip rule add from 121.78.242.10 table rescue priority 10
When putting data in the source of truth, we use the following rules:
  1. Don't repeat yourself.
  2. Put the data in the most specific place without breaking the first rule.
  3. Use templates sparingly, mostly to help with the previous rules.
  4. Restrict the data model to what is needed for your use case.
The first rule is important. For example, when specifying IP addresses for a point-to-point link, only specify one side and deduce the other in the templates (a sketch of this deduction follows the example below). The last rule means you do not need to mimic a BGP YANG model to specify BGP peers and policies:
peers:
  transit:
    cogent:
      asn: 174
      remote:
        - 38.140.30.233
        - 2001:550:2:B::1F9:1
      specific-import:
        - name: ATT-US
          as-path: ".*7018$"
          lp-delta: 50
  ix-sfmix:
    rs-sfmix:
      monitored: true
      asn: 63055
      remote:
        - 206.197.187.253
        - 206.197.187.254
        - 2001:504:30::ba06:3055:1
        - 2001:504:30::ba06:3055:2
    blizzard:
      asn: 57976
      remote:
        - 206.197.187.42
        - 2001:504:30::ba05:7976:1
      irr: AS-BLIZZARD
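As for the first rule, here is a sketch of how the other end of a point-to-point link could be deduced from the single declared address, assuming /31 or /127 subnets; peer_address() is a hypothetical helper, not something shipped with Jerikan:
# Sketch: deduce the far end of a point-to-point link from one address.
import ipaddress


def peer_address(local):
    iface = ipaddress.ip_interface(local)
    a, b = iface.network[0], iface.network[1]
    other = b if iface.ip == a else a
    return f"{other}/{iface.network.prefixlen}"


print(peer_address("192.0.2.0/31"))     # 192.0.2.1/31
print(peer_address("2001:db8::1/127"))  # 2001:db8::/127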

Templates The list of templates to compile for each device is stored in the source of truth, under the build namespace:
$ ./run-jerikan lookup edge1.ussfo03.blade-group.net build templates
data.yaml: data.j2
config.txt: junos/main.j2
config-base.txt: junos/base.j2
config-irr.txt: junos/irr.j2
$ ./run-jerikan lookup to1-p1.ussfo03.blade-group.net build templates
data.yaml: data.j2
config.txt: cumulus/main.j2
frr.conf: cumulus/frr.j2
interfaces.conf: cumulus/interfaces.j2
ports.conf: cumulus/ports.j2
dhcpd.conf: cumulus/dhcp.j2
default-isc-dhcp: cumulus/default-isc-dhcp.j2
authorized_keys: cumulus/authorized-keys.j2
motd: linux/motd.j2
acl.rules: cumulus/acl.j2
rsyslog.conf: cumulus/rsyslog.conf.j2
Templates are using Jinja2, the same engine used in Ansible. Jerikan ships some custom filters but also reuses some of the useful filters from Ansible, notably ipaddr. Here is an excerpt of templates/junos/base.j2 to configure DNS and NTP servers on Juniper devices:
system {
  ntp {
{% for ntp in lookup("system", "ntp") %}
    server {{ ntp }};
{% endfor %}
  }
  name-server {
{% for dns in lookup("system", "dns") %}
    {{ dns }};
{% endfor %}
  }
}
The equivalent template for Cisco IOS-XR is:
{% for dns in lookup('system', 'dns') %}
domain vrf VRF-MANAGEMENT name-server {{ dns }}
{% endfor %}
!
{% for syslog in lookup('system', 'syslog') %}
logging {{ syslog }} vrf VRF-MANAGEMENT
{% endfor %}
!
There are three helper functions provided (a sketch of how they could be wired into the template engine follows this list):
  • devices() returns the list of devices matching a set of conditions on the scope. For example, devices("location==ussfo03", "groups==tor-bgp") returns the list of devices in San Francisco in the tor-bgp group. You can also omit the operator if you want the specified value to be equal to the one in the local scope. For example, devices("location") returns devices in the current location.
  • lookup() does a key lookup. It takes the namespace, the key, and optionally, a device name. If not provided, the current device is assumed.
  • scope() returns the scope of the provided device.
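As promised above, here is a sketch of how such helpers could be exposed to templates; the database object and its methods are placeholders for whatever backs the source of truth, only the wiring is illustrated:
# Sketch: expose devices(), lookup(), and scope() to Jinja2 templates.
import jinja2


def make_env(current_device, database):
    env = jinja2.Environment(loader=jinja2.FileSystemLoader("templates"),
                             undefined=jinja2.StrictUndefined)

    def lookup(namespace, key, device=None):
        # Fall back to the device being rendered when none is given.
        return database.lookup(device or current_device, namespace, key)

    def devices(*conditions):
        return database.match(current_device, *conditions)

    def scope(device):
        return database.scope(device)

    env.globals.update(lookup=lookup, devices=devices, scope=scope)
    return env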
Here is how you would define iBGP sessions between edge devices in the same location:
{% for neighbor in devices("location", "groups==edge") if neighbor != device %}
  {% for address in lookup("topology", "addresses", neighbor).loopback | tolist %}
protocols bgp group IPV{{ address | ipv }}-EDGES-IBGP {
  neighbor {{ address }} {
    description "IPv{{ address | ipv }}: iBGP to {{ neighbor }}";
  }
}
  {% endfor %}
{% endfor %}
We also have a global key-value store to save information to be reused in another template or device. This is quite useful to automatically build DNS records. First, capture the IP address inserted into a template with store() as a filter:
interface Loopback0
 description 'Loopback:'
{% for address in lookup('topology', 'addresses').loopback | tolist %}
 ipv{{ address | ipv }} address {{ address | store('addresses', 'Loopback0') | ipaddr('cidr') }}
{% endfor %}
!
Then, reuse it later to build DNS records by iterating over store():4
{% for device, ip, interface in store('addresses') %}
{% set interface = interface | replace('/', '-') | replace('.', '-') | replace(':', '-') %}
{% set name = '{}.{}'.format(interface | lower, device) %}
{{ name }}. IN {{ 'A' if ip | ipv4 else 'AAAA' }} {{ ip | ipaddr('address') }}
{% endfor %}
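A minimal sketch of such a store could look like this; the (device, value, name) tuple layout mirrors the DNS example above but is an assumption, and binding the current device with functools.partial is just one possible design:
# Sketch: a global key-value store usable both as a Jinja2 filter (records a
# value while passing it through) and as a function (iterates what was stored).
import collections
import functools


class Store:
    def __init__(self):
        self._data = collections.defaultdict(list)

    def record(self, value, kind, name, device=None):
        # Filter side: "{{ address | store('addresses', 'Loopback0') }}".
        self._data[kind].append((device, value, name))
        return value

    def entries(self, kind):
        # Function side: "{% for device, ip, name in store('addresses') %}".
        return list(self._data[kind])


store = Store()
def register(env, device):
    # Bind the current device so templates do not have to pass it.
    env.filters["store"] = functools.partial(store.record, device=device)
    env.globals["store"] = store.entries
Note how the same name ends up registered both as a filter and as a global function, which is exactly the kind of duality described in footnote 4.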
Templates are compiled locally with ./run-jerikan build. The --limit argument restricts the devices to generate configuration files for. Build is not done in parallel because a template may depend on the data collected by another template. Currently, it takes 1 minute to compile around 3000 files spanning over 800 devices.
Jerikan outputs when building templates
Output of Jerikan after building configuration files for six devices
When an error occurs, a detailed traceback is displayed, including the template name, the line number and the value of all visible variables. This is a major time-saver compared to Ansible!
templates/opengear/config.j2:15: in top-level template code
    config.interfaces.{{ interface }}.netmask {{ infos.adddress | ipaddr("netmask") }}
        continent  = 'us'
        device     = 'con1-ag2.ussfo03.blade-group.net'
        environment = 'prod'
        host       = 'con1-ag2.ussfo03'
        infos      = {'address': '172.30.24.19/21'}
        interface  = 'wan'
        location   = 'ussfo03'
        loop       = <LoopContext 1/2>
        member     = '2'
        model      = 'cm7132-2-dac'
        os         = 'opengear'
        shorthost  = 'con1-ag2'
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
value = JerkianUndefined, query = 'netmask', version = False, alias = 'ipaddr'
[ ]
        # Check if value is a list and parse each element
        if isinstance(value, (list, tuple, types.GeneratorType)):
            _ret = [ipaddr(element, str(query), version) for element in value]
            return [item for item in _ret if item]
>       elif not value or value is True:
E       jinja2.exceptions.UndefinedError: 'dict object' has no attribute 'adddress'
We don't have general-purpose rules when writing templates. Like for the source of truth, there is no need to create generic templates able to produce any BGP configuration. There is a balance to be found between readability and avoiding duplication. Templates can become scary and complex: sometimes, it's better to write a filter or a function in jerikan/jinja.py. Mastering Jinja2 is a good investment. Take time to browse through our templates as some of them show interesting features.

Checks Optionally, each configuration file can be validated by a script in the checks/ directory. Jerikan looks up the key checks in the build namespace to know which checks to run:
$ ./run-jerikan lookup edge1.ussfo03.blade-group.net build checks
- description: Juniper configuration file syntax check
  script: checks/junoser
  cache:
    input: config.txt
    output: config-set.txt
- description: check YAML data
  script: checks/data.yaml
  cache: data.yaml
In the above example, checks/junoser is executed if there is a change to the generated config.txt file. It also outputs a transformed version of the configuration file which is easier to understand when using diff. Junoser checks a Junos configuration file using Juniper's XML schema definition for Netconf.5 On error, Jerikan displays:
jerikan/build.py:127: RuntimeError
-------------- Captured syntax check with Junoser call --------------
P: checks/junoser edge2.ussfo03.blade-group.net
C: /app/jerikan
O:
E: Invalid syntax:  set system syslog archive size 10m files 10 word-readable
S: 1
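The caching behavior can be pictured with a short sketch: hash the input file and skip the script when nothing changed. Paths, the marker layout, and the function name are assumptions, not Jerikan's actual implementation:
# Sketch: run a check only when its cached input file has changed.
import hashlib
import pathlib
import subprocess


def run_check(script, input_file, cache_dir="cache"):
    digest = hashlib.sha256(pathlib.Path(input_file).read_bytes()).hexdigest()
    marker = pathlib.Path(cache_dir) / f"{pathlib.Path(input_file).name}.{digest}"
    if marker.exists():
        return 0  # input unchanged since the last successful run
    result = subprocess.run([script, input_file])
    if result.returncode == 0:
        marker.parent.mkdir(parents=True, exist_ok=True)
        marker.touch()
    return result.returncode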

Integration into GitLab CI The next step is to compile the templates using a CI. As we are using GitLab, Jerikan ships with a .gitlab-ci.yml file. When we need to make a change, we create a dedicated branch and a merge request. GitLab compiles the templates using the same environment we use on our laptops and stores them as artifacts.
Merge requests in GitLab for a change
Merge request to add a new port in USSFO03. The templates were compiled successfully but approval from another team member is still required to merge.
Before approving the merge request, another team member looks at the changes in data and templates but also the differences for the generated configuration files:
Differences for generated configuration files
The change configures a port on the Juniper device, adds records to DNS, and updates NetBox with the new IP addresses (not shown).

Ansible After Jerikan has built the configuration files, Ansible takes over. It is also packaged as a Docker image to avoid the trouble of maintaining the right Python virtual environment and to ensure everyone uses the same versions.

Inventory Jerikan has generated an inventory file. It contains all the managed devices, the variables defined for each of them and the groups converted to Ansible groups:
ob1-n1.sk1.blade-group.net ansible_host=172.29.15.12 ansible_user=blade ansible_connection=network_cli ansible_network_os=ios
ob2-n1.sk1.blade-group.net ansible_host=172.29.15.13 ansible_user=blade ansible_connection=network_cli ansible_network_os=ios
ob1-n1.ussfo03.blade-group.net ansible_host=172.29.15.12 ansible_user=blade ansible_connection=network_cli ansible_network_os=ios
none ansible_connection=local
[oob]
ob1-n1.sk1.blade-group.net
ob2-n1.sk1.blade-group.net
ob1-n1.ussfo03.blade-group.net
[os-ios]
ob1-n1.sk1.blade-group.net
ob2-n1.sk1.blade-group.net
ob1-n1.ussfo03.blade-group.net
[model-c2960s]
ob1-n1.sk1.blade-group.net
ob2-n1.sk1.blade-group.net
ob1-n1.ussfo03.blade-group.net
[location-sk1]
ob1-n1.sk1.blade-group.net
ob2-n1.sk1.blade-group.net
[location-ussfo03]
ob1-n1.ussfo03.blade-group.net
[in-sync]
ob1-n1.sk1.blade-group.net
ob2-n1.sk1.blade-group.net
ob1-n1.ussfo03.blade-group.net
none
in-sync is a special group for devices whose configuration should match the golden configuration. Daily and unattended, Ansible should be able to push configurations to this group. The mid-term goal is to cover all devices. none is a special device for tasks not related to a specific host. This includes synchronizing NetBox, IRR objects, and the DNS, updating the RPKI, and building the geofeed files.
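Generating such an inventory from the per-device scopes is mostly string assembly; here is a rough sketch under assumed scope keys (ip, os, model, location, groups, in_sync), not the real Jerikan code:
# Sketch: build an INI inventory like the excerpt above from device scopes.
def build_inventory(scopes):
    lines, groups = [], {}
    for device, scope in sorted(scopes.items()):
        lines.append(f"{device} ansible_host={scope['ip']} ansible_user=blade "
                     f"ansible_connection=network_cli "
                     f"ansible_network_os={scope['os']}")
        for group in ([f"os-{scope['os']}", f"model-{scope['model']}",
                       f"location-{scope['location']}"] + scope.get("groups", [])):
            groups.setdefault(group, []).append(device)
        if scope.get("in_sync"):
            groups.setdefault("in-sync", []).append(device)
    lines.append("none ansible_connection=local")
    for group, members in sorted(groups.items()):
        lines.append(f"[{group}]")
        lines.extend(members)
    return "\n".join(lines)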

Playbook We use a single playbook for all devices. It is described in the ansible/playbooks/site.yaml file. Here is a shortened version:
- hosts: adm-gateway:!done
  strategy: mitogen_linear
  roles:
    - blade.linux
    - blade.adm-gateway
    - done
- hosts: os-linux:!done
  strategy: mitogen_linear
  roles:
    - blade.linux
    - done
- hosts: os-junos:!done
  gather_facts: false
  roles:
    - blade.junos
    - done
- hosts: os-opengear:!done
  gather_facts: false
  roles:
    - blade.opengear
    - done
- hosts: none:!done
  gather_facts: false
  roles:
    - blade.none
    - done
A host executes only one of the plays. For example, a Junos device executes the blade.junos role. Once a play has been executed, the device is added to the done group and the other plays are skipped. The playbook can be executed with the configuration files generated by the GitLab CI using the ./run-ansible-gitlab command. This is a wrapper around Docker and the ansible-playbook command and accepts the same arguments. To deploy the configuration on the edge devices for the SK1 datacenter in check mode, we use:
$ ./run-ansible-gitlab playbooks/site.yaml --limit='edge:&location-sk1' --diff --check
[ ]
PLAY RECAP *************************************************************
edge1.sk1.blade-group.net  : ok=6    changed=0    unreachable=0    failed=0    skipped=3    rescued=0    ignored=0
edge2.sk1.blade-group.net  : ok=5    changed=0    unreachable=0    failed=0    skipped=1    rescued=0    ignored=0
We have some rules when writing roles:
  • --check must detect if a change is needed;
  • --diff must provide a visualization of the planned changes;
  • --check and --diff must not display anything if there is nothing to change;
  • writing a custom module tailored to our needs is a valid solution (a minimal skeleton follows this list);
  • the whole device configuration is managed;6
  • secrets must be stored in Vault;
  • templates should be avoided as we have Jerikan for that; and
  • avoid duplication and reuse tasks.7
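To illustrate the rules about --check and --diff, and since writing a custom module is considered a valid solution, here is a minimal module skeleton honoring both; the "write a file" logic is only an example, not one of our actual modules:
# Sketch: a custom Ansible module that supports check mode and diff output.
from ansible.module_utils.basic import AnsibleModule


def main():
    module = AnsibleModule(
        argument_spec=dict(
            path=dict(type="path", required=True),
            content=dict(type="str", required=True),
        ),
        supports_check_mode=True,
    )
    path, content = module.params["path"], module.params["content"]
    try:
        with open(path) as f:
            before = f.read()
    except FileNotFoundError:
        before = ""
    changed = before != content
    diff = {"before": before, "after": content}
    if changed and not module.check_mode:
        with open(path, "w") as f:
            f.write(content)
    # Nothing is reported when there is nothing to change.
    module.exit_json(changed=changed, diff=diff)


if __name__ == "__main__":
    main()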
We avoid using collections from Ansible Galaxy, the exception being collections to connect and interact with vendor devices, like the cisco.iosxr collection. The quality of Ansible Galaxy collections is quite uneven and they are an additional maintenance burden. It seems better to write roles tailored to our needs. The collections we use are in ci/ansible/ansible-galaxy.yaml. We use Mitogen to get a 10× speedup on Ansible executions on Linux hosts. We also have a few playbooks for operational purposes: upgrading the OS version, isolating an edge router, etc. We were also planning to add operational checks to roles: are all the BGP sessions up? They could have been used to validate a deployment and roll back if there is an issue. Currently, our playbooks are run from our laptops. To keep tabs, we are using ARA. A weekly dry-run on devices in the in-sync group also provides a dashboard showing which devices we still need to run Ansible on.

Configuration data and templates Jerikan ships with pre-populated data and templates matching the configuration of our USSFO03 and SK1 datacenters. They do not exist anymore but, we promise, all this was used in production back in the day!
Network architecture for Blade datacenter
The latest iteration of our network infrastructure for SK1, USSFO03, and future data centers. The production network uses BGPttH over a spine-leaf fabric. The out-of-band network uses a simple L2 design with the spanning tree protocol, as well as a set of console servers.
Notably, you can find the configuration for:
our edge routers
Some are running on Junos, like edge2.ussfo03, the others on IOS-XR, like edge1.sk1. The implemented functionalities are similar in both cases and we could swap one for the other. It includes the BGP configuration for transits, peerings, and IX as well as the associated BGP policies. PeeringDB is queried to get the maximum number of prefixes to accept for peerings. bgpq3 and a containerized IRRd help to filter received routes. A firewall is added to protect the routing engine. Both IPv4 and IPv6 are configured.
our BGP-based fabric
BGP is used inside the datacenter8 and is extended to bare-metal hosts. The configuration is automatically derived from the device location and the port number.9 Top-of-the-rack devices use passive BGP sessions for the ports towards servers. They also serve a provisioning network to let servers boot using DHCP and PXE, and act as DHCP servers. The design is multivendor. Some devices are running Cumulus Linux, like to1-p1.ussfo03, while some others are running Junos, like to1-p2.ussfo03.
our out-of-band fabric
We are using Cisco Catalyst 2960 switches to build an L2 out-of-band network. To provide redundancy and save a few bucks on wiring, we build small loops and run the spanning-tree protocol. See ob1-p1.ussfo03. It is redundantly connected to our gateway servers. We also use OpenGear devices for console access. See con1-n1.ussfo03.
our administrative gateways
These Linux servers have multiple purposes: SSH jump boxes, rescue connection, direct access to the out-of-band network, zero-touch provisioning of network devices,10 Internet access for management flows, centralization of the console servers using Conserver, and an API for autoconfiguration of BGP sessions for bare-metal servers. They are the first servers installed in a new datacenter and are used to provision everything else. Check both the generated files and the associated Ansible tasks.

  1. Ansible does not even provide a line number when there is an error in a template. You may need to find the problem by bisecting.
    $ ansible --version
    ansible 2.10.8
    [ ]
    $ cat test.j2
    Hello {{ name }}!
    $ ansible all -i localhost, \
    >  --connection=local \
    >  -m template \
    >  -a "src=test.j2 dest=test.txt"
    localhost | FAILED! => {
        "changed": false,
        "msg": "AnsibleUndefinedVariable: 'name' is undefined"
    }
    
  2. You may recognize the same concepts as in Hiera, the hierarchical key-value store from Puppet. At first, we were using Jerakia, a similar independent store exposing an HTTP REST interface. However, the lookup overhead is too large for our use. Jerikan implements the same functionality within a Python function.
  3. The list of available filters is mangled inside jerikan/jinja.py. This is a remnant of the fact that we do not maintain Jerikan as standalone software.
  4. This is a bit confusing: we have a store() filter and a store() function. With Jinja2, filters and functions live in two different namespaces.
  5. We are using a fork with some modifications to be able to validate our configurations and to expose an HTTP service to reduce the time spent on each configuration check.
  6. There is a trend in network automation to automate only a configuration subset, for example by having a playbook to create a new BGP session. We believe this is wrong: with time, your configuration will get out of sync with its expected state; notably, hand-made changes will be left undetected.
  7. See ansible/roles/blade.linux/tasks/firewall.yaml and ansible/roles/blade.linux/tasks/interfaces.yaml. They are meant to be called when needed, using import_role.
  8. We also have some datacenters using BGP EVPN VXLAN at medium-scale using Juniper devices. As they are still in production today, we didn't include this feature but we may publish it in the future.
  9. In retrospect, this may not be a good idea unless you are pretty sure everything is uniform (number of switches for each layer, number of ports). This was not our case. We now think it is a better idea to assign a prefix to each device and write it in the source of truth.
  10. Non-Linux-based devices are upgraded and configured unattended. Cumulus Linux devices are automatically upgraded on install but the final configuration has to be pushed using Ansible: we didn't want to duplicate the configuration process using another tool.

4 May 2021

Steve Kemp: Password store plugin: env

Like many I use pass for storing usernames and passwords. This gives me easy access to credentials in a secure manner. I don't like the way that the metadata (i.e. filenames) are public, but that aside it is a robust tool I've been using for several years. The last time I talked about pass was when I talked about showing the age of my credentials, via the integrated git support. That then became a pass-plugin:
  frodo ~ $ pass age
  6 years ago GPG/root@localhost.gpg
  6 years ago GPG/steve@steve.org.uk.OLD.gpg
  ..
  4 years, 8 months ago Domains/Domain.fi.gpg
  4 years, 7 months ago Mobile/dna.fi.gpg
  ..
  1 year, 3 months ago Websites/netlify.com.gpg
  1 year ago Financial/ukko.fi.gpg
  1 year ago Mobile/KiK.gpg
  4 days ago Enfuce/sre.tst.gpg
  ..
Anyway today's work involved writing another plugin, named env. I store my data in pass in a consistent form, each entry looks like this:
   username: steve
   password: secrit
   site: http://example.com/login/blah/
   # Extra data
The keys vary, sometimes I use "login", sometimes "username", other times "email", but I always label the fields in some way. Recently I was working with some CLI tooling that wants to have a username/password specified and I patched it to read from the environment instead. Now I can run this:
     $ pass env internal/cli/tool-name
     export username="steve"
     export password="secrit"
That's ideal, because now I can source that from within a shell:
   $ source <(pass env internal/cli/tool-name)
   $ echo $username
   steve
Or I could directly execute the tool I want:
   $ pass env --exec=$HOME/ldap/ldap.py internal/cli/tool-name
   you are steve
   ..
TLDR: If you store your password entries in "key: value" form you can process them to export $KEY=$value, and that allows them to be used without copying and pasting into command-line arguments (e.g. "~/ldap/ldap.py --username=steve --password=secrit")

28 March 2021

Norbert Preining: RMS, Debian, and the world

Too much has been written, a war of support letters is going on, and Debian is heading head-first, like lemmings, into the same war. And then, there is this by the first female President of the American Civil Liberties Union (ACLU): Civil Rights Activist Nadine Strossen's Response To #CancelStallman:
I find it so odd that the strong zeal for revenge and punishment if someone says anything that is perceived to be sexist or racist or discriminatory comes from liberals and progressives. There are so many violations [in cases like Stallman s] of such fundamental principles to which progressives and liberals cling in general as to what is justice, what is fairness, what is due process.
One is proportionality: that the punishment should be proportional to the offense. Another one is restorative justice: that rather than retribution and punishment, we should seek to have the person constructively come to understand, repent, and make amends for an infraction. Liberals generally believe society to be too punitive, too harsh, not forgiving enough. They are certainly against the death penalty and other harsh punishments even for the most violent, the mass murderers. Progressives are right now advocating for the release of criminals, even murderers. To then have exactly the opposite attitude towards something that certainly is not committing physical violence against somebody, I don t understand the double standard!
Another cardinal principle is we shouldn t have any guilt by association. [To hold culpable] these board members who were affiliated with him and ostensibly didn t do enough to punish him for things that he said which by the way were completely separate from the Free Software Foundation is multiplying the problems of unwarranted punishment. It extends the punishment where the argument for responsibility and culpability becomes thinner and thinner to the vanishing point. That is also going to have an enormous adverse impact on the freedom of association, which is an important right protected in the U.S. by the First Amendment.
The Supreme Court has upheld freedom of association in cases involving organizations that were at the time highly controversial. It started with NAACP (National Association for the Advancement of Colored People) during the civil rights movement in the 1950s and 60s, but we have a case that s going to the Supreme Court right now regarding Black Lives Matter. The Supreme Court says even if one member of the group does commit a crime in both of those cases physical violence and assault that is not a justification for punishing other members of the group unless they specifically intended to participate in the particular punishable conduct.
Now, let s assume for the sake of argument, Stallman had an attitude that was objectively described as discriminatory on the basis on race and gender (and by the way I have seen nothing to indicate that), that he s an unrepentant misogynist, who really believes women are inferior. We are not going to correct those ideas, to enlighten him towards rejecting them and deciding to treat women as equals through a punitive approach! The only approach that could possibly work is an educational one! Engaging in speech, dialogue, discussion and leading him to re-examine his own ideas.
Even if I strongly disagree with a position or an idea, an expression of an idea, advocacy of an idea, and even if the vast majority of the public disagrees with the idea and finds it offensive, that is not a justification for suppressing the idea. And it s not a justification for taking away the equal rights of the person who espouses that idea including the right to continue holding a tenured position or other prominent position for which that person is qualified.
But a number of the ideas for which Richard Stallman has been attacked and punished are ideas that I as a feminist advocate of human rights find completely correct and positive from the perspective of women s equality and dignity! So for example, when he talks about the misuse and over use and flawed use of the term sexual assault, I completely agree with that critique! People are indiscriminantly using that term or synonyms to describe everything from the most appaulling violent abuse of helpless vulnerable victims (such as a rape of a minor) to any conduct or expression in the realm of gender or sexuality that they find unpleasant or disagreeable.
So we see the term sexual assault and sexual harrassment used for example, when a guy asks a woman out on a date and she doesn t find that an appealing invitation. Maybe he used poor judgement in asking her out, maybe he didn t, but in any case that is NOT sexual assault or harassment. To call it that is to really demean the huge horror and violence and predation that does exist when you are talking about violent sexual assault. People use the term sexual assault/ sexual harassment to refer to any comment about gender or sexuality issues that they disagree with or a joke that might not be in the best taste, again is that to be commended? No! But to condemn it and equate it with a violent sexual assault again is really denying and demeaning the actual suffering that people who are victims of sexual assault endure. It trivializes the serious infractions that are committed by people like Jeffrey Epstein and Harvey Weinstein. So that is one point that he made that I think is very important that I strongly agree with.
Secondly and relatedly, [Richard Stallman] never said that he endorse child pornography, which by definition the United States Supreme Court has defined it multiple times is the sexual exploitation of an actual minor. Coerced, forced, sexual activity by that minor, with that minor that happens to be filmed or photographed. That is the definition of child pornography. He never defends that! What the point he makes, a very important one, which the U.S. Supreme Court has also made, is mainly that we overuse and distort the term child pornography to refer to any depiction of any minor in any context that is even vaguely sexual.
So some people have not only denounced as child pornography but prosecuted and jailed loving devoted parents who committed the crime of taking a nude or semi-nude picture of their own child in a bathtub or their own child in a bathing suit. Again it is the hysteria that has totally refused to draw an absolutely critical distinction between actual violence and abuse, which is criminal and should be criminal, to any potentially sexual depiction of a minor. And I say potentially because I think if you look at a picture a parent has taken of a child in a bathtub and you see that as sexual, then I d say there s something in your perspective that might be questioned or challenged! But don t foist that upon the parent who is lovingly documenting their beloved child s life and activities without seeing anything sexual in that image.
This is a decision that involves line drawing. We tend to have this hysteria where once we hear terms like pedophilia of course you are going to condemn anything that could possibly have that label. Of course you would. But societies around the world throughout history various cultures and various religions and moral positions have disagreed about at what age do you respect the autonomy and individuality and freedom of choice of a young person around sexuality. And the U.S. Supreme Court held that in a case involving minors right to choose to have an abortion.
By the way, [contraception and abortion] is a realm of sexuality where liberals and progressives and feminists have been saying, Yes! If you re old enough to have sex. You should have the right to contraception and access to it. You should have the right to have an abortion. You shouldn t have to consult with your parents and have their permission or a judge s permission because you re sufficiently mature. And the Supreme Court sided in accord of that position. The U.S. Supreme Court said constitutional rights do not magically mature and spring into being only when someone happens to attain the state defined age of majority.
In other words the constitution doesn t prevent anyone from exercising rights, including Rights and sexual freedoms, freedom of choice and autonomy at a certain age! And so you can t have it both ways. You can t say well we re strongly in favor of minors having the right to decide what to do with their own bodies, to have an abortion what is in some people s minds murder but we re not going to trust them to decide to have sex with somewhat older than they are.
And I say somewhat older than they are because that s something where the law has also been subject to change. On all issues of when you obtain the age of majority, states differ on that widely and they choose different ages for different activities. When you re old enough to drive, to have sex with someone around your age, to have sex with someone much older than you. There is no magic objective answer to these questions. I think people need to take seriously the importance of sexual freedom and autonomy and that certainly includes women, feminists. They have to take seriously the question of respecting a young person s autonomy in that area.
There have been famous cases of 18 year olds who have gone to prison because they had consensual sex with their girlfriends who were a couple of years younger. A lot of people would not consider that pedophilia and yet under some strict laws and some absolute definitions it is. Romeo and Juliet laws make an exception to pedophilia laws when there is only a relatively small age difference. But what is relatively small? So to me, especially when he says he is re-examining his position, Stallman is just thinking through the very serious debate of how to be protective and respectful of young people. He is not being disrespectful, much less wishing harm upon young people, which seems to be what his detractors think he s doing.
Unfortunately, I don't think the Anti-Harassment Team of Debian and others of the usual group of warriors will ever read, let alone understand, what is written there. So sad.

27 March 2021

Neil Williams: Free and Open

A long time, no blog entries. What's prompted a new missive? Two things: All the time I've worked with software and computers, the lack of diversity in the contributors has irked me. I started my career in pharmaceutical sciences where the mix of people, at university, would appear to be genuinely diverse. It was in the workplace that the problems started, especially in the retail portion of community pharmacy with a particular gender imbalance between management and counter staff. Then I started to pick up programming. I gravitated to free software for the simple reason that I could tinker with the source code. To do a career change and learn how to program with a background in a completely alien branch of science, the only realistic route into the field is to have access to the raw material - source code. Without that, I would have been seeking a sponsor, in much the same way as other branches of science need grants or sponsors to purchase the equipment and facilities to get a foot in the door. All it took from me was some time, a willingness to learn and a means to work alongside those already involved. That's it. No complex titration equipment, flasks, centrifuges, pipettes, spectrographs, or petri dishes. It's hard to do pharmaceutical science from home. Software is so much more accessible - but only if the software itself is free. Software freedom removes the main barrier to entry. Software freedom enables the removal of other barriers too. Once the software is free, it becomes obvious that the compiler (indeed the entire toolchain) needs to be free, to turn the software into something other people can use. The same applies to interpreters, editors, kernels and the entire stack. I was in a different branch of science whilst all that was being created and I am very glad that Debian Woody was available as a free cover disc on a software magazine just at the time I was looking to pick up software programming. That should be that. The only next step to be good enough to write free software was the "means to work alongside those already involved". That, it turns out, is much more than just a machine running Debian. It's more than just having an ISP account and working email (not commonplace in 2003). It's working alongside the people - all the people. It was my first real exposure to the toxicity of some parts of many scientific and technical arenas. Where was the diversity? OK, maybe it was just early days and the other barriers (like an account with an ISP) were prohibitive for many parts of the world outside Europe & USA in 2003, so there were few people from other countries but the software world was massively dominated by the white Caucasian male. I'd been insulated by my degree course, and to a large extent by my university which also had courses which already had much more diverse intakes - optics and pharmacy, business and human resources. Relocating from the insular world of a little town in Wales to Birmingham was also key. Maybe things would improve as the technical barriers to internet connectivity were lowered. Sadly, no. The echo chamber of white Caucasian input has become more and more diluted as other countries have built the infrastructure to get the populace online. Debian has helped in this area, principally via DebConf. Yet only the ethnicity seemed to change, not the diversity. Recently, more is being done to at least make Debian more welcoming to those who are brave enough to increase the mix. 
Progress is much slower than the gains seen in the ethnicity mix, probably because that was a benefit of a technological change, not a societal change. The attitudes so prevalent in the late 20th century are becoming less prevalent amongst, and increasingly abhorrent to, the next generation of potential community members. Diversity must come or the pool of contributors will shrink to nil. Community members who cling to these attitudes are already dinosaurs and increasingly unwelcome. This is a necessary step to retain access to new contributors as existing contributors age. To be able to increase the number of contributors, the community cannot afford to be dragged backwards by anyone, no matter how important or (previously) respected. Debian, or even free software overall, cannot change all the problems with diversity in STEM but we must not perpetuate the problems either. Those people involved in free software need to be open to change, to input from all portions of society and welcoming. Puerile jokes and disrespectful attitudes must be a thing of the past. The technical contributions do not excuse behaviours that act to prevent new people joining the community. Debian is getting older, the community and the people. The presence of spouses at Debian social events does not fix the problem of the lack of diversity at Debian technical events. As contributors age, Debian must welcome new, younger, people to continue the work. All the technical contributions from the people during the 20th century will not sustain Debian in the 21st century. Bit rot affects us all. If the people who provided those contributions are not encouraging a more diverse mix to sustain free software into the future then all their contributions will be for nought and free software will die. So it comes to the FSF and RMS. I did hear Richard speak at an event in Bristol many years ago. I haven't personally witnessed the behavioural patterns that have been described by others but my memories of that event only add to the reality of those accounts. No attempts to be inclusive, jokes that focused on division and perpetuating the white male echo chamber. I'm not perfect, I struggle with some of this at times. Personally, I find the FSFE and Red Hat statements much more in line with my feelings on the matter than the open letter which is the basis of the GR. I deplore the preliminary FSF statement on governance as closed-minded, opaque and archaic. It only adds to my concerns that the FSF is not fit for the 21st century. The open letter has the advantage that it is a common text which has the backing of many in the community, individuals and groups, who are fit for purpose and whom I have respected for a long time. Free software must be open to contributions from all who are technically capable of doing the technical work required. Free software must equally require respect for all contributors and technical excellence is not an excuse for disrespect. Diversity is the life blood of all social groups and without social cohesion, technical contributions alone do not support a community. People can change and apologies are welcome when accompanied by modified behaviour. The issues causing a lack of diversity in Debian are complex and mostly reflective of a wider problem with STEM. Debian can only survive by working to change those parts of the wider problem which come within our influence. Influence is key here, this is a soft issue, replete with unreliable emotions, unhelpful polemic and complexity. 
Reputations are hard won and easily blemished. The problems are all about how Debian looks to those who are thinking about where to focus in the years to come. It looks like the FSFE should be the ones to take the baton from the FSF - unless the FSF can adopt a truly inclusive and open governance. My problem is not with RMS himself, what he has or has not done, which apologies are deemed sincere and whether behaviour has changed. He is one man and his contributions can be respected even as his behaviour is criticised. My problem is with the FSF for behaving in a closed, opaque and divisive manner and then making a governance statement that makes things worse, whilst purporting to be transparent. Institutions lay the foundations for the future of the community and must be expected to hold individuals to account. Free and open have been contentious concepts for the FSF, with all the arguments about Open Source which is not Free Software. It is clear that the FSF do understand the implications of freedom and openness. It is absurd to then adopt a closed and archaic governance. A valid governance model for the FSF would never have allowed RMS back onto the board, instead the FSF should be the primary institution to hold him, and others like him, to account for their actions. The FSF needs to be front and centre in promoting diversity and openness. The FSF could learn from the FSFE. The calls for the FSF to adopt a more diverse, inclusive, board membership are not new. I can only echo the Red Hat statement:
in order to regain the confidence of the broader free software community, the FSF should make fundamental and lasting changes to its governance.
...
[There is] no reason to believe that the most recent FSF board statement signals any meaningful commitment to positive change.
And the FSFE statement:
The goal of the software freedom movement is to empower all people to control technology and thereby create a better society for everyone. Free Software is meant to serve everyone regardless of their age, ability or disability, gender identity, sex, ethnicity, nationality, religion or sexual orientation. This requires an inclusive and diverse environment that welcomes all contributors equally.
The FSF has not demonstrated the behaviour I expect from a free software institution and I cannot respect an institution which proclaims a mantra of freedom and openness that is not reflected in the governance of that institution. The preliminary statement by the FSF board is abhorrent. The FSF must now take up the offers from those institutions within the community who retain the respect of that community. I'm only one voice, but I would implore the FSF to make substantive, positive and permanent change to the governance, practices and future of the FSF or face irrelevance.

24 March 2021

Sam Hartman: Making our Community Safe: the FSF and rms

I felt disgust and horror when I learned yesterday that rms had returned to the FSF board. When rms resigned back in September of 2019, I was Debian Project Leader. At that time, I felt two things. First, I was happy that the community was finally taking a stand in favor of inclusion, respect, and creating a safe, welcoming place to do our work. It was long past time for rms to move on. But I also felt thankful that rms was not my problem to solve. In significant part because of rms, I had never personally been that involved in the FSF. I considered drafting a statement as Debian Project Leader. I could have talked about how through our Diversity Statement and Code of Conduct we had taken a stand in favor of inclusion and respect. I could have talked about how rms's actions displayed a lack of understanding and empathy and how this created a community that was neither welcoming nor respectful. I didn't. I guess I didn't want to deal with confirming I had sufficient support in the project. I wanted to focus on internal goals, and I was healing and learning from some mistakes I made earlier in the year. It looked like other people were saying what needed to be said and my voice was not required. Silence was a mistake.

It's a mistake I've been making all throughout my interactions with rms. Enough is enough. It's long past time I added my voice to those who cry for accountability and who will not sit aside while rms's disrespect and harm is tolerated.

The first time I was silent about rms was around 15 years ago. I was at a science fiction convention in a crowded party. I didn't know anyone, other than the host of the party. I was out of my depth. I heard his voice---I recognized it from the Share the Software Song. He was hitting on some girl, talking about how he invented Emacs. As best I could tell, she didn't even know what Emacs was. Back then, I wondered what she saw in the interaction; why she stuck around even though she didn't know what he was talking about. I sure didn't want to be around; the interaction between the two of them was making me uncomfortable. Besides, the wings on her costume kept hitting me in the face. So I left as fast as I could.

I've learned a lot about creating safe spaces and avoiding sexual harassment since then. Thinking back, she was probably hitting me because she was trying to back away and getting crowded. If this happened today, I think I would do a better job of owning my responsibility for helping keep the space around me safe. I've learned better techniques for checking in to make sure people around me are comfortable.

I didn't come to silence alone: I had been educated into the culture of avoiding rms and not calling him out. There was a running game in the group of computer security professionals I learned from. The goal was to see how much you could contribute to free software and computer security without being recognized by or interacting with rms. And so, indoctrinated into a culture of silence about the harm that rms caused, I took my first step.

Things weren't much better when I attended Libreplanet 2019 just before taking office as Debian Project Leader. I had stayed away from the conference in large part because of rms. But there were Debian people there, and I was missing community interaction. Unfortunately, I saw that even after the problems of 2018, rms was still treating himself as above community standards. He interrupted speakers, objecting to how they phrased the problem they were considering. After a speech on codes of conduct in the free software community, he cornered the talk organizer to "ask her opinion" about the GNU project's lack of code on conduct. He wasn't asking for an opinion. He was justifying himself; there wasn't much listening in the conversation I heard. Aspects of that conversation crossed professional boundaries for what should be said. The talk organizer was okay--we talked about it after--but if we did a better job of policing our community, that wouldn't even be a question. I think the most telling sign was a discussion with an FSF board member. We were having a great conversation, but he had to interrupt it. He was on rms duty (my words) at the next session. The board had decided it was necessary to have members there so that the staff would not be put in awkward positions by their president. If someone needed to call rms out, it could be a board member rather than the staff members of the conduct team.

And yet again, I held my silence. It's so easy to keep silent. It's not that I never speak up. There are communities where I have called people out. But it's hard to paint that target on yourself. It's hard to engage and to stand strong for a community's standards when you aren't the target. It's hard to approach these problems while maintaining empathy for everyone involved. Some people give into the rage; I don't have that option if I want to be the person I've chosen to be. And so, when I do speak up, the emotional cost is high.

Yet, it's long past time I raised my voice on this issue. Rms has demonstrated that he cannot hold to standards of respect for others, respect for their boundaries, or standards of community safety. We need those standards to be a welcoming community.

If the people who came before me--those who taught me the game of avoiding rms--had spoken up, the community could have healed before I even came on the scene. If I and others had stood up fifteen years ago, we'd have another couple generations who were more used to respect, inclusion, welcoming and safety. The FSF board could have done their job back in 2018. And perhaps if more of us had spoken out in 2019, the FSF board would have found the strength to stand strong and not accept rms's return.

And so, finally, I raise my voice. I signed the open letter calling for the resignation of rms and the entire FSF board. Perhaps if we all get used to raising our voice, it will be easier. Perhaps if we stand together, taking the path of community rather than the path of silence, we'll have the support we need to create communities inclusive enough to welcome everyone who can contribute. For me, I'm done being silent.

There's one criticism of the open letter I'd like to respond to. I've heard concerns about asking for the resignation of the entire FSF board under the understanding that some board members voted against rms's return. It should be obvious why those who voted for rms's return need to resign. But resignation does not always mean you did something wrong. If you find yourself in a leadership role in an organization that takes decisions in significant conflict with your standards of ethics, resignation is also the right path. Staying on the board even if you voted against rms's return means that you consider voting for rms to be a reasonable thing to do. It means that even if you disagreed with it, you can still be part of an organization that takes the path of welcoming rms. At this point, I cannot do that, nor can I support leaders in the FSF who do.

23 March 2021

Gunnar Wolf: Regarding the Stallman comeback

Context:
  1. Richard Stallman is the founder of the Free Software movement, and committed his life to making what seemed like a ludicrous idea into a tangible reality. We owe him big time for that, and nothing somebody says or does will ever eclipse the fact.
  2. But Richard Stallman has a very toxic personality. There is a long, well-known published list of abuse cases; if you must read more into it, some regarding his views on sex, consent, gender, and some other issues are published as a part of the open letter I am about to reference. I have witnessed quite a few; I won't disclose the details here, as many other incidents are already known. And I don't mean by this sexual abuse, although that's the straw that eventually broke the camel's back, but behavior ranging from general rudeness to an absolute lack of consideration for people around him.
  3. In September 2019, Stallman was forced to resign, first from his position at MIT, then as the president of the FSF. The direct cause was a comment in which he defended Minsky (a personal friend of his, deceased three years prior) against accusations of sexual abuse.
  4. Last week, 18 months after he was driven out of the FSF, and at LibrePlanet (the FSF's signature conference, usually held at MIT, this time naturally online only), Stallman announced his comeback to the Board of Directors of the FSF.
Many people (me included, naturally) in the Free Software world are very angry about this announcement. There is a call for signatures for a position statement presented by several free software leaders that has gathered, as I write this message, over 400 signatures. The Open Source Initiative has presented its institutional position statement. And I can only forecast this rejection will continue to grow. Free software was once the arena of young, raging alpha machos where a thick skin was an entry requirement. A good thing about growing up is that our community is now wiser, and although it still attracts younger people, there is a clear trend not to repeat our past ways. Free software has grown, and there is no place for a leader so disrespectful and hurting as many of us have witnessed Stallman to be. Again, the free software movement and the world as a whole owes a great deal to Stallman. He changed history. I admire his work, his persistence and his stubbornness. But I won t have him represent me.

22 March 2021

Wouter Verhelst: Twenty years of Debian

Ten years ago, I reflected on the fact that -- by that time -- I had been in Debian for just over ten years. This year, in early February, I've passed the twenty year milestone. As I'm turning 43 this year, I will have been in Debian for half my life in about three years. Scary thought, that. In the past ten years, not much has changed, and yet at the same time, much has. I became involved in the Debian video team; I stepped down from the m68k port; and my organizing of the Debian devroom at FOSDEM resulted in me eventually joining the FOSDEM orga team, where I eventually ended up also doing video. As part of my video work, I wrote SReview, for which in these COVID-19 times in much of my spare time I have had to write new code and/or fix bugs. I was a candidate for the position of DPL one more time, without being elected. I was also a candidate for the technical committee a few times, also without success. I also added a few packages to the list of packages that I maintain for Debian; most obviously this includes SReview, but there's also things like extrepo and policy-rcd-declarative, both fairly recent packages that I hope will improve Debian as a whole in the longer term. On a more personal level, at one debconf I met a wonderful girl that I now have just celebrated my first wedding anniversary with. Before that could happen, I have had to move to South Africa two years ago. Moving is an involved process at any one time; moving to a different continent altogether is even more so. As it would have been complicated and involved to remain a business owner of a Belgian business while living 9500km away from the country, I sold my shares to my (now ex) business partner; it turned the page of a 15-year chapter of my life, something I could not do without feelings one way or the other. The things I do in Debian has changed over the past twenty years. I've been the maintainer of the second-highest number of packages in the project when I maintained the Linux Gazette packages; I've been an m68k porter; I've been an AM, and briefly even an NM frontdesk member; I've been a DPL candidate three times, and a TC candidate twice. At the turn of my first decade of being a Debian Developer, I noted that people started to recognize my name, and that I started to be one of the Debian Developers who had been with the project longer than most. This has, obviously, not changed. New in the "I'm getting old" department is the fact that during the last Debconf, I noticed for the first time that there was a speaker who had been alive for less long than I had been a Debian Developer. I'm assuming these types of things will continue happening in the next decade, and that the future will bring more of these kinds of changes that will make me feel older as I and the project mature more. I'm looking forward to it. Here's to you, Debian; may you continue to influence my life, in good ways and in bad (but hopefully mostly good), as well as continue to inspire me to improve the world, as you have over the past twenty years!

18 March 2021

Arturo Borrero González: Debugging ip token set RTNETLINK error

Networking At the Wikimedia Foundation, basically all servers are configured with an IPv4/IPv6 dual stack, at least on the control plane interfaces (those used for SSH management, etc). IPv6 is not supported yet on the Cloud Services dataplane (OpenStack), but it will be in the near future. An elegant solution for this IPv4/IPv6 dual stack configuration in the control plane is to embed the IPv4 address into the IPv6 address, something like this:
IPv4: 208.80.154.23
IPv6: 2620:0:861:1:208:80:154:23
                   ^^^^^^^^^^^^^
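A tiny sketch shows how such an address can be derived, assuming the convention is simply to reuse the IPv4 octets verbatim as the last four IPv6 groups; embed_ipv4() is a hypothetical helper, not part of the actual puppet code:
# Sketch: splice the IPv4 octets into the end of an IPv6 /64 prefix.
def embed_ipv4(prefix, ipv4):
    return prefix.rstrip(":") + ":" + ":".join(ipv4.split("."))


print(embed_ipv4("2620:0:861:1::", "208.80.154.23"))
# 2620:0:861:1:208:80:154:23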
The gateways send prefix advertisements and, by default, IPv6 autoconfiguration uses such router prefixes plus the interface MAC address to build the final address. This behavior can be changed: you can explicitly set some data for the kernel to use instead of the MAC address when building the final IPv6 address. The trick is to use the ip token command; further context and reasoning for this can be seen in the puppet manifest that the SRE team uses to configure this:
The token command explicitly configures the lower 64 bits that will be used with any autoconf
address, as opposed to one derived from the macaddr. This aligns the autoconf-assigned address with
the fixed one set above, and can do so as a pre-up command to avoid ever having another address
even temporarily, when this is all set up before boot.
For example:
user@debian:~$ sudo ip token list
token ::208:80:154:23 dev eno1
token :: dev eno2
token :: dev eno3
token :: dev eno4
user@debian:~$ sudo ip token set dev eno2 ::beef
user@debian:~$ ip token get dev eno2
token ::beef dev eno2
user@debian:~$ ip token list
token ::208:80:154:23 dev eno1
token ::beef dev eno2
token :: dev eno3
token :: dev eno4
I was installing a new server the other day and the networking service (ifupdown) was failing to bring the interface up. The issue was the ip token command failing:
root@cloudgw2002-dev:~# ip token set ::10:192:20:18 dev eno1
RTNETLINK answers: Invalid argument
RTNETLINK error reporting is awful. What could be wrong with that command? Well, I had to dig into the kernel source code to figure out where that EINVAL was being generated. The routine can be seen in the function inet6_set_iftoken() (source):
static int inet6_set_iftoken(struct inet6_dev *idev, struct in6_addr *token)
{
    struct inet6_ifaddr *ifp;
    struct net_device *dev = idev->dev;
    bool clear_token, update_rs = false;
    struct in6_addr ll_addr;

    ASSERT_RTNL();

    if (!token)
        return -EINVAL;
    if (dev->flags & (IFF_LOOPBACK | IFF_NOARP))
        return -EINVAL;
    if (!ipv6_accept_ra(idev))
        return -EINVAL;
    if (idev->cnf.rtr_solicits == 0)
        return -EINVAL;
[..]
I couldn't identify the reason at first glance; apparently every check there passed, yet something was wrong. On closer inspection, I finally read the ipv6_accept_ra() function (source):
static inline bool ipv6_accept_ra(struct inet6_dev *idev)
{
    /* If forwarding is enabled, RA are not accepted unless the special
     * hybrid mode (accept_ra=2) is enabled.
     */
    return idev->cnf.forwarding ? idev->cnf.accept_ra == 2 :
           idev->cnf.accept_ra;
}
That was it! Forwarding was enabled on the interface, and it shouldn't have been. I fixed my configuration with a simple patch to the sysctl configuration. Original issue on the Wikimedia Foundation ticketing system: Phabricator T277287
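For illustration only, a sysctl fix along these lines could either disable forwarding on the interface or switch it to the hybrid accept_ra mode; the file name and interface below are assumptions, not the actual change deployed at WMF:
# Hypothetical /etc/sysctl.d/70-ipv6-token.conf (sketch)
# Option 1: stop forwarding on the interface, so plain accept_ra works again.
net.ipv6.conf.eno1.forwarding = 0
# Option 2: keep forwarding but accept RAs in hybrid mode, which also
# satisfies the ipv6_accept_ra() check shown above.
# net.ipv6.conf.eno1.accept_ra = 2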

6 March 2021

Shirish Agarwal: Making life difficult

Freedom House puts India in "partly free"
Just a couple of days ago, Freedom House published its 2020 rankings for all countries, including India. While Freedom House shared how democracy in the world has weakened, India chose to take offense at being called "partly free".
[Illustration: "India, leader in Internet shutdowns", Access Now (copyright)]
The above illustration is shared by accessnow. The next biggest in Internet shutdowns are Yemen with 6 and Ethiopia with 4. Such internet shutdowns have had, and will have, sad repercussions, as I will share in another story as well.

Color-coding journalists
A story was broken by Caravan magazine yesterday, followed by Newslaundry, which shows how the Govt. is looking to just drive some narrative; it does not matter whether it's true or false, it should just show that the Govt. is right and everyone else is wrong. As can be seen, almost all reporters, barring a few, have kept silent rather than refuting statements attributed to them or happenings which didn't happen. And this goes to a much larger narrative and disinformation route taken by the Govt., one which doesn't have any semblance to the truth or reality as people know it. I will illustrate a couple of examples below which show that. In all my young and even adult life I hadn't seen a Govt. this much against its own people.

Omega Seiki puts a manufacturing plant in Bangladesh
Omega Seiki is an Indian vendor which chose, or had to choose, to manufacture its electric vehicles in Bangladesh. While this is a slightly old story, it was broken on social media recently. Everybody started blaming the vendor and saying we should break the FTA (Free Trade Agreement) with Bangladesh, not knowing that despite the FTA, India has put tariff barriers between India and Bangladesh. I had to share research from Brookings to show where India has been losing. Of course, those who don't want to see wouldn't see anything wrong in the picture.

Teen raped, asked to marry the accused when she turns 18
You may see the above headline and feel it is ridiculous, but the fact is that these orders were given by the Madras High Court a couple of months back. This was then reported by both Livelaw and BarandBench. To be truthful, this news didn't make as much noise as it should have, probably because, as I had shared previously, the Govt. wants to lower the marriageable age to 15 or even less. And this is despite all the medical evidence to the contrary, because it assuages this Govt.'s masculinity. There is also the very recent case where the SC CJI asked a rapist whether he was willing to marry the girl, who was underage, when she reaches maturity, and another one in which it seems marital rape is not a crime according to the CJI. So this is the state of things in which India finds itself today. There are judges like Vrinda Grover who do question the CJI, but they are few, and there are costs for those who ask questions.

As shared, this news was overtaken by other news and would have remained so, had not one of the leaders of the present Govt., one Ramesh Jarkiholi, who hails from the Belagavi region of north Karnataka, been caught in a sex CD scandal, basically asking sexual favors for a permanent Govt. job. He had made statements after the Madras High Court case applauding the judgement given by the judge. Due to public pressure he had to resign, but not before stating that he had blue films of everybody, including the Chief Minister of the State. And sad to report, six Karnataka Ministers rushed today, or rather yesterday, to put a petition in the civil court to restrain media from airing/printing/publishing any defamatory content against them. The court has granted a media gag against 68 media houses for the same. Sadly, the recent happenings only reinforce what has been happening in Karnataka for a decade. Update 07/03/2021: it seems yesterday another 10-odd ministers rushed to get the same order.

It seems different laws apply to politicians vis-a-vis others. A recent example is Rhea Chakravarthy, an actress and girlfriend of Sushant Singh, who was hounded in his suicide case, with many accusations made on TV but no evidence till date. From what we know as facts, Sushant committed suicide as he was not getting work due to cronyism in Bollywood. Those who were behind it have white-washed themselves, deleted their tweets, etc., and while the public knows, there is no accountability for them. There is and was so much more I wanted to share about what has been happening to women; sadly and thankfully, arre did the needful for me. They wrote an entire article which tells what the situation for women in India is today. And if you are wondering why I say that, it is because when a site made exclusively for people to laugh, have a good time, and get relief starts writing serious articles, you can be sure that things have gone horribly wrong.

Asking Tesla to come to India while at the same time being ambivalent on batteries
Recently, Mr. Nitin Gadkari, a prominent minister of the present Govt., invited Tesla and offered all sorts of incentives to start a manufacturing plant here in India. And while it seems that Tesla has accepted, looking at the Vodafone case, hopefully Tesla makes its contracts such that if something goes wrong and they need to sue the Govt., they can do it in the States or elsewhere. The way the Govt. acted in the Vodafone case has been a dampener to MNC investments so far. Although, to be fair to both Tesla and the GOI, the basic models, even if they are manufactured in India, will go to less than 1% of the population. The cheapest Tesla Model Y, which retails in the U.S. for USD 40k, would be around INR 30 lakh. And this is their cheapest car to date. I do know there are rumors of a USD 25k model, but that is probably 2-3 years away, as shared by Tesla China President Tom Zhu in an interview shared on YT.

https://www.youtube.com/watch?v=aH5leMWFBxI
There are a couple of interesting comments being made. The fact that China is going to fully open its automobile market to western companies shows how confident China feels about its own vendors. And I'm not as impressed by Tesla as I am by the tiny-car revolution happening in China. India, if it wanted to, could learn many lessons from China, even from the electric buses they started in 2010 itself, when people in our auto industry thought it was all a fad. Sadly, we are missing most of the technology, and if and by the time Tesla starts a production line, I don't know where we would get our lithium. India hasn't been as aggressive as some other countries when it comes to securing raw natural resources abroad. Even besides that, it has been tough when you have so many people who still believe that ICE (Internal Combustion Engine) vehicles are better than EVs, and even if they know better, they choose to believe the propaganda. A couple of months ago, an inquest into the death of a young UK girl who had died due to asthma found that air pollution was a factor. The girl's name was Ella Adoo-Kissi-Debrah. This is the first case where a doctor ruled air pollution a chief factor in a person's death, probably the first ruling of its kind anywhere in the world. I have also shared TCO studies between BEVs and ICE vehicles done by people who are considering an electric vehicle, but those studies seem to fall on deaf ears.

Starlink
This is another of Elon Musk's ventures, and it would be a money spinner for sure around the globe. While Starlink has asked TRAI for permission, I don't think they will get it. There is also Bharti Global's Oneweb, which probably has a better chance of getting permission. The reason is censorship. As shared above, India is now a leader in Internet shutdowns, and I see this trend only accelerating rather than going the other way. For people who don't remember, recall how satellite phones were made illegal even though only businessmen could afford them, and this was just 5 years ago. As shared, Oneweb would have a better shot, as they would accept all Government directives without a second thought. Unless Starlink gives a binding commitment to the Govt. to be a willing partner when it wants to have internet shutdowns, it will not work. How Elon approaches that remains to be seen. FWIW, you can't access the Starlink webpage on BSNL broadband. My broadband provider gives at most 300 kbps and sometimes, late at night or early in the morning, around 500 kbps.

Farmer Protests
Lastly, the farmer protests have entered their 100th day. In the interim, Vivek Kaul, an economist, took stock of the Bihar APMC to see if things have really worked as the Govt.'s supporters were claiming. The investigation and its results didn't inspire the confidence the Govt. suggested. The sad part, though, is that nowadays nobody, at least not those in power or their supporters, is keen to read, understand, or even argue otherwise. They are all happy with WhatsApp knowledge. Till date, 200+ people have died in the farmer protests. All mainstream media houses have stopped talking about the farmers in the hope that they will disappear. At the end of the day, the Govt. wants the corporates to win, whatever the cost.
